
The Godfather of Artificial Intelligence

STOCKHOLM (AP) — Two pioneers of artificial intelligence — John Hopfield and Geoffrey Hinton — won the Nobel Prize in physics Tuesday for helping create the building blocks of machine learning that is revolutionizing the way we work and live but also creates new threats for humanity.

Hinton, who is known as the godfather of artificial intelligence, is a citizen of Canada and Britain who works at the University of Toronto, and Hopfield is an American working at Princeton.

MSN

Geoffrey Everest Hinton (born 6 December 1947) is a British-Canadian computer scientist, cognitive scientist, psychologist, Turing Award recipient, and Nobel Prize laureate in Physics, most noted for his work on artificial neural networks, which has earned him the title of “Godfather of AI”.

In May 2023, Hinton announced his resignation from Google to be able to “freely speak out about the risks of A.I.” He has voiced concerns about deliberate misuse by malicious actors, technological unemployment, and existential risk from artificial general intelligence. He noted that establishing safety guidelines will require cooperation among those competing in the use of AI in order to avoid the worst outcomes.

He also said that a part of him now regrets his life’s work.

[I]n a March 2023 interview with CBS, he stated that “general-purpose AI” may be fewer than 20 years away and could bring about changes “comparable in scale with the industrial revolution or electricity.”

Wikipedia / Photo by Ramsey Cardy / Collision via Sportsfile

Prof. Hinton may or may not be concerned about the rise of the gangster AI chicken – something that worries the Elders of WLBOTT a great deal.


Clifton College

Prof. Hinton attended Clifton College (a high school) in Bristol in South West England.

Clifton College is a public school in the city of Bristol in South West England, founded in 1862 and offering both boarding and day school for pupils aged 13–18. In its early years, unlike most contemporary public schools, it emphasised science rather than classics in the curriculum, and was less concerned with social elitism, for example by admitting day-boys on equal terms and providing a dedicated boarding house for Jewish boys, called Polack’s House.

Motto: Spiritus Intus Alit (Latin for “The spirit nourishes within”)

Wikipedia

Clifton College is also the alma mater (a Latin phrase that means “nourishing mother”) of John Cleese.

(not John Cleese)

The Clifton College Dining Hall

The dining hall also functions as a wedding venue.

Beans for Breakfast

You can check the current menu here….


This Guy is Connected…..

Hinton is the great-great-grandson of the mathematician and educator Mary Everest Boole and her husband, the logician George Boole. George Boole’s work eventually became one of the foundations of modern computer science.

Hinton’s father was the entomologist Howard Hinton. His middle name comes from another relative, George Everest, the Surveyor General of India after whom the mountain is named.

Wikipedia

His dad, H. E. Hinton, was a pupa guy.

WLBOTT Wonders: Does AI go through a pupa stage?


Reference: Elder G Explains Neural Networks

Neural networks are a key concept in machine learning and artificial intelligence, and they’re inspired by the way biological brains (like ours) process information. While they are much simpler than actual brains, they mimic some of the same principles.

What is a Neural Network?

A neural network is a collection of artificial neurons (called nodes) that are connected together in layers. These layers process information and “learn” by adjusting the strength of the connections (called weights) between nodes.

Here’s a simplified breakdown:

1. Neurons (Nodes): Each node in a neural network is like a mini processor. It takes input data, applies a mathematical operation, and passes the result to the next layer.

2. Layers:

Input Layer: This is where the data enters the network. Each node in this layer represents a feature (or attribute) of the data. For example, if you’re feeding an image to a network, the input layer might represent the pixel values.

Hidden Layers: These layers sit between the input and output layers. The nodes in hidden layers transform the input in various ways, detecting patterns and relationships. The network can have one or many hidden layers (in deep learning, there are many layers—hence “deep” learning).

Output Layer: This is where the network produces its result. In a classification task (like deciding if an image contains a cat or a dog), the output layer might contain one node for each category (e.g., “cat” and “dog”).

3. Weights: The connections between neurons have weights, which determine how strongly one neuron affects another. Initially, these weights are set randomly, but they are adjusted as the network learns from data.

4. Activation Function: After a neuron processes its inputs, it applies an activation function to decide whether or not to “activate” and send information forward. Popular activation functions include ReLU (Rectified Linear Unit), which passes positive values through unchanged and outputs zero otherwise, and sigmoid, which squeezes outputs between 0 and 1. A short code sketch of all four pieces follows below.
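Here’s a minimal sketch of those four pieces in Python (assuming NumPy; the layer sizes, random weights, and input values are made up purely for illustration):

import numpy as np

def relu(x):
    # ReLU: passes positive values through unchanged, outputs zero otherwise
    return np.maximum(0, x)

def sigmoid(x):
    # Sigmoid: squeezes any input into the range (0, 1)
    return 1 / (1 + np.exp(-x))

rng = np.random.default_rng(0)

# Weights start out random, exactly as described in point 3:
# 3 input features -> 4 hidden nodes -> 2 output nodes
W1 = rng.normal(size=(3, 4))    # connections: input layer -> hidden layer
W2 = rng.normal(size=(4, 2))    # connections: hidden layer -> output layer

x = np.array([0.5, -1.2, 3.0])  # one data point with 3 features

hidden = relu(x @ W1)           # hidden layer: weighted sum, then ReLU
output = sigmoid(hidden @ W2)   # output layer: one value per category
print(output)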

How Does Learning Happen?

Neural networks learn by adjusting their weights through a process called backpropagation. Here’s how it works:

1. Feedforward: Data moves from the input layer, through the hidden layers, to the output layer. The network makes a prediction based on its current weights.

2. Compare Prediction to Reality: The network’s output is compared to the actual answer (called the ground truth). This comparison generates an error (or loss), which tells the network how far off its predictions were.

3. Backpropagation: The network uses the error to adjust the weights. It moves backward through the layers, tweaking the connections to reduce the error for future predictions.

4. Iteration: This process repeats over many passes through the training data (each full pass is called an epoch), and over time the network learns to make better predictions by fine-tuning its weights.
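Here’s a minimal sketch of that four-step loop in Python (assuming NumPy; the tiny random dataset, layer sizes, and learning rate are made up for illustration, and the gradients are worked out by hand for one hidden layer with a mean-squared-error loss):

import numpy as np

rng = np.random.default_rng(0)

X = rng.normal(size=(8, 3))           # 8 examples, 3 features each
y = rng.normal(size=(8, 1))           # the ground truth for each example

W1 = rng.normal(size=(3, 4)) * 0.1    # weights start out random
W2 = rng.normal(size=(4, 1)) * 0.1
lr = 0.05                             # learning rate: how big each tweak is

for epoch in range(100):              # 4. Iteration: repeat over many epochs
    # 1. Feedforward: input -> hidden -> output
    h = np.maximum(0, X @ W1)         # hidden layer with ReLU
    y_hat = h @ W2                    # the network's predictions

    # 2. Compare prediction to reality: mean squared error
    loss = np.mean((y_hat - y) ** 2)

    # 3. Backpropagation: move the error backward through the layers
    d_out = 2 * (y_hat - y) / len(X)  # how the loss changes with the output
    dW2 = h.T @ d_out                 # how to tweak the hidden->output weights
    d_h = (d_out @ W2.T) * (h > 0)    # ReLU passes gradient only where it fired
    dW1 = X.T @ d_h                   # how to tweak the input->hidden weights

    W1 -= lr * dW1                    # adjust the weights to reduce the error
    W2 -= lr * dW2

print(f"loss after training: {loss:.4f}")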

Why are Neural Networks Powerful?

Neural networks can:

Model Complex Patterns: With enough layers and neurons, they can model incredibly complex relationships in data.

Generalize Well: After being trained on enough data, they can generalize and make accurate predictions on new, unseen data.

Work with Raw Data: For tasks like image recognition, neural networks can directly process raw data (e.g., pixel values) without needing hand-crafted features.

A Real-World Example: Image Classification

Imagine you want to train a neural network to recognize cats in photos.

Input Layer: The input would be the pixel values of the image (e.g., for a 28×28 image, there would be 784 input nodes, one for each pixel).

Hidden Layers: These layers would learn to detect patterns, like edges or textures. As the data moves deeper into the network, the patterns become more complex—like learning what a cat’s eyes or fur looks like.

Output Layer: There would be two nodes, one representing “cat” and the other “no cat.” The node with the highest value would be the network’s prediction.

Over time, the network learns which features (patterns) best distinguish cats from non-cats, and it fine-tunes its weights to improve its accuracy.
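Here’s a minimal sketch of those shapes in Python (assuming NumPy; the weights are random and untrained and the “image” is just noise, so the prediction is meaningless until the network has been trained as described above):

import numpy as np

rng = np.random.default_rng(0)

image = rng.random((28, 28))             # stand-in for a real 28x28 photo
x = image.reshape(784)                   # input layer: 784 pixel values

W1 = rng.normal(size=(784, 128)) * 0.01  # hidden layer: 128 pattern detectors
W2 = rng.normal(size=(128, 2)) * 0.01    # output layer: "cat" and "no cat"

h = np.maximum(0, x @ W1)                # hidden activations (ReLU)
scores = h @ W2                          # one score per category

labels = ["cat", "no cat"]
print(labels[int(np.argmax(scores))])    # the node with the highest value wins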

Neural networks are behind many modern AI applications, from speech recognition to self-driving cars, and their power lies in their ability to learn directly from data.

Elder G
