By continuing to use the site, you agree to our use of cookies and to abide by our Terms and Conditions. We in turn value your personal details in accordance with our Privacy Policy.
Please log in or register. Registered visitors get fewer ads.
Yoshua Bengio, Canadian-French computer scientist who received the 2018 ACM A.M. Turing Award, often referred to as the "Nobel Prize of Computing", together with Geoffrey Hinton and Yann LeCun, for their foundational work on deep learning (wikipedia) gave this interview just now on the World Service.
I think we've had discussions on here in the past where people have played down the risks of AI, but this expert speaks with great nervousness about AI models which exhibit 'deception, cheating, lying, blackmailing, and trying to hack the host computer when being shut down.'
He speaks of AI as becoming a 'competitor to humanity' with 'catastrophic risks' and 'threats to democracy'. He says governments aren't taking the real risks seriously, and experimental systems have been seen to develop/create their own goals to harm humans and escape from their constraints.
He references 'the end of humanity' as a possibility several times, and speaks of the phenomenal speed of change and the 'tendency in the last few months indicating that AIs want to break out and get rid of us'.
He says we have a 'window in which we could make right decisions' and that the 'public needs a voice' and needs to educate themselves. He says AI has its own intentions and that this is not a sci-fi movie but experiments going on in labs all over the world today.
---
I recognise that the above is all very vague in a sense, but then I think the implication is that our knowledge and control of the future of AI in our/its world is pretty vague and tenuous too, and perhaps not really in our hands as much as we might wish to think, and that the timeline seems to be very compressed.
Yoshua Bengio seems to me to speak with great authority and seems very worried.
Just wondering what people's thoughts are in June 2025 (and how they may have changed since say June 2024).
Do we just carry on and accentuate the positives, or do we change our world view a bit and step back on this issue whilst we have a chance?
An area we call black box where we don’t fully understand it. (Google CEO). So I don’t think it’s correct to talk about our understanding where it’s at as being simply “wrong” or right.
Also have a look from minute 56 on the video I posted. He’s talking about AI learning the analogies and how it’s going to be far more powerful than humans as it’s going to see analogies that we have never considered. So we are talking about going far beyond data models supplied by humans which are constricted by human understanding. What is that if it’s not learning.
I’ve read Mustapha Sueyman’s Ai the coming wave. It’s hard if not impossible to read that and not have major concerns. Have a read of that if you are interested in the subject.
[Post edited 19 Jun 9:11]
That video link doesn't work for me.
I understand why these things are considered a black box, we ask it things and it answers but there is no debuggable way of answering why it answered the way it did, or why it did.
Theoretically, you could have these things learn on the fly but there's a very good reason you don't - bad data. An AI has no idea how to sort good data from bad so if you let it run wild, it would just end up as a complete mess.
So right now there has to be a human being behind it determining what is and isn't bad data, and hence why they release new models as they tweak things and add more high quality data in.
But even that is not truly learning, it's not having thoughts or consciousness, it's just adding more data.
AI enthusiasts: What do you make of Yoshua Bengio's interview? on 09:25 - Jun 19 by DanTheMan
That video link doesn't work for me.
I understand why these things are considered a black box, we ask it things and it answers but there is no debuggable way of answering why it answered the way it did, or why it did.
Theoretically, you could have these things learn on the fly but there's a very good reason you don't - bad data. An AI has no idea how to sort good data from bad so if you let it run wild, it would just end up as a complete mess.
So right now there has to be a human being behind it determining what is and isn't bad data, and hence why they release new models as they tweak things and add more high quality data in.
But even that is not truly learning, it's not having thoughts or consciousness, it's just adding more data.
If you train a model on "A is B" it will not understand that "B is A". As humans who can reason, we can do this.
Dan - watch the YouTube. read the book.
You have a very coders view of this at the moment but Im referring to the people architecting it. And even they don't really understand where we are or what's coming next.
You frame data models as if they constitute the boundaries of what AI can think about or conceive but as Geoffery Hinton says in that section from 56 minutes, thats just not the case.
Its like giving a talented teenager access to the British library and believing they cant develop ideas beyond the books they read.
You are missing the whole point of AI using the data as a model but then effectively building analogies, thinking beyond the data and developing solutions not considered by humans. Thats the very essence of what AI is about and very quickly becoming. And some of those solutions may not actually be in human interest, more in the AIs interest.
AI enthusiasts: What do you make of Yoshua Bengio's interview? on 08:11 - Jun 19 by DanTheMan
"Anyone can develop an AI model with a good enough compoota."
Sure, if you've hundreds of thousands to millions of pounds to train it properly and the expertise to do it well, and you're not just piggybacking off someone else's model.
And that's without getting into whatever you're training it on.
I'm going to do the counter-point here as someone who has some experience with computer science - the biggest changes to our lives from this I predict, will be.
1. People are going to start getting easily fooled by AI images and videos which is going to make disinformation and increasingly difficult thing for the average person to deal with.
2. Lots of companies are going to start advertising they do AI by which they mean they've integrated with one of the existing LLMs and made a crappy chatbot that is mostly useless.
3. Some companies will try to use AI as a human replacement, which might work for a bit until it starts to hallucinate, as it likes to do, and it causes issues.
This is in the short term at least.
I can't see any evidence of any breakthroughs means that in 5 years "everything will change". Right now, AI (and mostly LLMs are what we are talking about here) is a tool that needs to be wielded by someone who knows what they're doing, not a human replacement.
That might mean that people are more productive, and then that causes job losses.
Yes, I agree with all this. It’s the latest hype cycle — the great AGI breakthrough is always around the corner.
The main danger comes from governments using it to make decisions over humans due to its supposed “intelligence”, or companies using it for, say, production code.
0
AI enthusiasts: What do you make of Yoshua Bengio's interview? on 10:08 - Jun 19 with 609 views
AI enthusiasts: What do you make of Yoshua Bengio's interview? on 09:03 - Jun 19 by thebooks
“These models are based on architecting neural networks based on the human brain” — you can’t just add “based on the human brain” to imply they’re intelligent. They’re not.
For example, an LLM looks at the statistical probability of a token (a word, stem or bit of punctuation) following another within its huge datasets. The neural network provides a means to consider the tokens it’s already output when it calculates what the next one should be — you can’t do this using other methods such as tables as the amounts of data to cross reference are mind-bogglingly vast.
It is still just ouputting tokens based on when they appear in existing texts. It has no idea whether what it’s conveying is true, what it means or anything else.
I can add based on the human brain because thats exactly how Geoffrey Hinton, the guy who architected all this has said himself.
Its called a neural network for a reason.
And they are going to become far more intelligent than any human on the planet.
AI enthusiasts: What do you make of Yoshua Bengio's interview? on 10:08 - Jun 19 by nodge_blue
I can add based on the human brain because thats exactly how Geoffrey Hinton, the guy who architected all this has said himself.
Its called a neural network for a reason.
And they are going to become far more intelligent than any human on the planet.
Yes, I know why they’re called that. The point is they’re used in training LLMs because they provide a method to suggest what token should be output next based on what’s been output so far and what exists in the training data. That’s still not intelligence, it’s just generating tokens probablistically. It still can’t know whether whether the output is true, meaningful, ethical or whatever.
“And they are going to become far more intelligent than any human on the planet.” — Always in the future…
1
AI enthusiasts: What do you make of Yoshua Bengio's interview? on 10:23 - Jun 19 with 585 views
AI enthusiasts: What do you make of Yoshua Bengio's interview? on 10:17 - Jun 19 by thebooks
Yes, I know why they’re called that. The point is they’re used in training LLMs because they provide a method to suggest what token should be output next based on what’s been output so far and what exists in the training data. That’s still not intelligence, it’s just generating tokens probablistically. It still can’t know whether whether the output is true, meaningful, ethical or whatever.
“And they are going to become far more intelligent than any human on the planet.” — Always in the future…
AI enthusiasts: What do you make of Yoshua Bengio's interview? on 10:08 - Jun 19 by nodge_blue
I can add based on the human brain because thats exactly how Geoffrey Hinton, the guy who architected all this has said himself.
Its called a neural network for a reason.
And they are going to become far more intelligent than any human on the planet.
We don't even fully understand how the human brain works - maybe we understand 20/30% of how the brain works , and that's being generous. So how are we going to make machines that are superior to something we really don't understand?
For example, if you didn't understand how a rocket works, how could you improve on its design?
AI enthusiasts: What do you make of Yoshua Bengio's interview? on 13:34 - Jun 19 by blueasfook
We don't even fully understand how the human brain works - maybe we understand 20/30% of how the brain works , and that's being generous. So how are we going to make machines that are superior to something we really don't understand?
For example, if you didn't understand how a rocket works, how could you improve on its design?
I think he is being somewhat sensationalist. I think climate change currently poses a far bigger risk to humanity than AI. Often they use vacuous statements like hacking critical systems or using the internet for its own means or threaten democracy.
I think it is more likely humans use it for bad ends than AI.
One argument I've seen put forward is companies and individuals within the AI field talk up the risk in order to generate income to mitigate against that risk, when the risk isn't substantial compared to other risks.
I also think some of these threats are already critical through other means, yet we are unwilling to deal with these collectively. Take threat to democracy - Elon Musk is a biased actor who bought US votes through lotteries and controls access to what information people see. Disinformation is rampant. Companies pay governments money to take actions in their interest that are not in the interest of the electorate.
As for my day to day life. I use AI to assist me in basic programming tasks. It has got better at this, but the speed of improvement is not a quick as those warning of doom and gloom seem to suggest.
My company also uses AI for other things like voiceovers (TTS) in marketing and Gemini/ChatGPT to refine their written work. This makes us more productive as individuals and a company. Another way to put this is also that we are less reliant on other companies. Now, we've always had a do it ourselves culture, so other service providers aren't missing out on business from us. But I can imagine companies that interact with more service providers scaling that back causing a rise in unemployment in certain service sectors.
[Post edited 19 Jun 15:17]
Submit your 1-24 league prediction here -https://www.twtd.co.uk/forum/514096/page:1 - for the opportunity to get a free Ipswich top.
AI enthusiasts: What do you make of Yoshua Bengio's interview? on 15:15 - Jun 19 by Kropotkin123
I think he is being somewhat sensationalist. I think climate change currently poses a far bigger risk to humanity than AI. Often they use vacuous statements like hacking critical systems or using the internet for its own means or threaten democracy.
I think it is more likely humans use it for bad ends than AI.
One argument I've seen put forward is companies and individuals within the AI field talk up the risk in order to generate income to mitigate against that risk, when the risk isn't substantial compared to other risks.
I also think some of these threats are already critical through other means, yet we are unwilling to deal with these collectively. Take threat to democracy - Elon Musk is a biased actor who bought US votes through lotteries and controls access to what information people see. Disinformation is rampant. Companies pay governments money to take actions in their interest that are not in the interest of the electorate.
As for my day to day life. I use AI to assist me in basic programming tasks. It has got better at this, but the speed of improvement is not a quick as those warning of doom and gloom seem to suggest.
My company also uses AI for other things like voiceovers (TTS) in marketing and Gemini/ChatGPT to refine their written work. This makes us more productive as individuals and a company. Another way to put this is also that we are less reliant on other companies. Now, we've always had a do it ourselves culture, so other service providers aren't missing out on business from us. But I can imagine companies that interact with more service providers scaling that back causing a rise in unemployment in certain service sectors.
[Post edited 19 Jun 15:17]
I just listened to some of that podcast.
What is being sensationalist about reporting of AI behaviour in a pre production environment? The AI has read emails, understood that it’s going to be replaced effectively by an upgrade, understood through emails a lead developer was having an affair and then blackmailed him with that information to try and prevent the upgrade.
And that’s supposed to be AI not able to self learn or think or do malicious things?
AI enthusiasts: What do you make of Yoshua Bengio's interview? on 08:29 - Jun 20 by nodge_blue
I just listened to some of that podcast.
What is being sensationalist about reporting of AI behaviour in a pre production environment? The AI has read emails, understood that it’s going to be replaced effectively by an upgrade, understood through emails a lead developer was having an affair and then blackmailed him with that information to try and prevent the upgrade.
And that’s supposed to be AI not able to self learn or think or do malicious things?
LLMs are not "thinking" in the same way a human is. I've read the safety report, it's a very contrived example they set up to elicit the response it did.
It's been trained on numerous novels where blackmailing someone is a common occurrence, and that's what it does.
Now, that's an issue if you start selling this to people to use, because it could happen, but it's not learning anything. It's been given additional context and told to care about self-preservation and has essentially completed the story of an AI trying to keep itself "alive".
The companies making these AIs love putting these stories out as it drums up interest in their product; they've been at this for years.
AI enthusiasts: What do you make of Yoshua Bengio's interview? on 08:55 - Jun 20 by DanTheMan
LLMs are not "thinking" in the same way a human is. I've read the safety report, it's a very contrived example they set up to elicit the response it did.
It's been trained on numerous novels where blackmailing someone is a common occurrence, and that's what it does.
Now, that's an issue if you start selling this to people to use, because it could happen, but it's not learning anything. It's been given additional context and told to care about self-preservation and has essentially completed the story of an AI trying to keep itself "alive".
The companies making these AIs love putting these stories out as it drums up interest in their product; they've been at this for years.
Dan. I would still suggest you look at the video I posted from roughly the one hour mark for ten minutes. What is consciousness. The definition as we know it is heading for the bin.
AI enthusiasts: What do you make of Yoshua Bengio's interview? on 10:23 - Jun 19 by nodge_blue
Watch from 1 hr 4 minutes
Excellent video, thanks.
"Intellectually you can see the threat, but it's very hard to come to terms with it emotionally."
I think that's a very telling statement.
It ilustrates why a number of 'tech leaders' who tend to have high drive and high intellect, but perhaps lack a normal emotional response, are willing (compelled, perhaps) to express these very concerning ideas, whilst many 'normal' people can understand the concepts but won't really accept the premises. Yet.
It applies equally, I think, to climate change, and perhaps people who don't have children find thinking about the frightening scope of both of these existential threats a little easier to face.
AI enthusiasts: What do you make of Yoshua Bengio's interview? on 09:08 - Jun 20 by NthQldITFC
Excellent video, thanks.
"Intellectually you can see the threat, but it's very hard to come to terms with it emotionally."
I think that's a very telling statement.
It ilustrates why a number of 'tech leaders' who tend to have high drive and high intellect, but perhaps lack a normal emotional response, are willing (compelled, perhaps) to express these very concerning ideas, whilst many 'normal' people can understand the concepts but won't really accept the premises. Yet.
It applies equally, I think, to climate change, and perhaps people who don't have children find thinking about the frightening scope of both of these existential threats a little easier to face.
It's a fascinating subject. I was an IT architect but I don't do that now and in all honesty Im not very interested in IT now. But I do find AI interesting.
On a positive side, maybe AI could help with climate change solutions.
Im trying to be careful with the words Im using in this thread and defer to videos of people far more intelligent than me who are highlighting these issues. But people cant say categorically that AI isn't showing signs of thinking or consciousness.They equate it all to computer code having rules. But the whole neural network is mimicking the brain structure and the question being posed as with all the classic sci fi AI stuff, is whether its organic or digital, what constitutes thinking or consciousness.
AI enthusiasts: What do you make of Yoshua Bengio's interview? on 09:07 - Jun 20 by nodge_blue
Dan. I would still suggest you look at the video I posted from roughly the one hour mark for ten minutes. What is consciousness. The definition as we know it is heading for the bin.
I've listened to it, and it hasn't changed my view. It's hard to tell from the snippet, but to me it doesn't sound like he's talking about LLMs but about AIs in a much broader sense.
AI enthusiasts: What do you make of Yoshua Bengio's interview? on 09:48 - Jun 20 by nodge_blue
It's a fascinating subject. I was an IT architect but I don't do that now and in all honesty Im not very interested in IT now. But I do find AI interesting.
On a positive side, maybe AI could help with climate change solutions.
Im trying to be careful with the words Im using in this thread and defer to videos of people far more intelligent than me who are highlighting these issues. But people cant say categorically that AI isn't showing signs of thinking or consciousness.They equate it all to computer code having rules. But the whole neural network is mimicking the brain structure and the question being posed as with all the classic sci fi AI stuff, is whether its organic or digital, what constitutes thinking or consciousness.
[Post edited 20 Jun 9:48]
Our language (English or whatever verbal language we use) is also very unhelpful in these discussions. We argue about 'thinking' or 'consciousness' but what precise value do those two words have for the purposes of evaluation (and what precise value do other words used any attempts to define them have, or indeed the words I have just written to try to express what I'm thinking about words).
AI enthusiasts: What do you make of Yoshua Bengio's interview? on 09:55 - Jun 20 by NthQldITFC
Our language (English or whatever verbal language we use) is also very unhelpful in these discussions. We argue about 'thinking' or 'consciousness' but what precise value do those two words have for the purposes of evaluation (and what precise value do other words used any attempts to define them have, or indeed the words I have just written to try to express what I'm thinking about words).
AI enthusiasts: What do you make of Yoshua Bengio's interview? on 08:11 - Jun 19 by DanTheMan
"Anyone can develop an AI model with a good enough compoota."
Sure, if you've hundreds of thousands to millions of pounds to train it properly and the expertise to do it well, and you're not just piggybacking off someone else's model.
And that's without getting into whatever you're training it on.
I'm going to do the counter-point here as someone who has some experience with computer science - the biggest changes to our lives from this I predict, will be.
1. People are going to start getting easily fooled by AI images and videos which is going to make disinformation and increasingly difficult thing for the average person to deal with.
2. Lots of companies are going to start advertising they do AI by which they mean they've integrated with one of the existing LLMs and made a crappy chatbot that is mostly useless.
3. Some companies will try to use AI as a human replacement, which might work for a bit until it starts to hallucinate, as it likes to do, and it causes issues.
This is in the short term at least.
I can't see any evidence of any breakthroughs means that in 5 years "everything will change". Right now, AI (and mostly LLMs are what we are talking about here) is a tool that needs to be wielded by someone who knows what they're doing, not a human replacement.
That might mean that people are more productive, and then that causes job losses.
It's in the interest of various parties to talk up AI.
A few interesting studies have been published recently, including one from Apple saying that LLM's are essentially rubbish outside their data set and that we are a long way of AGI.
Of course you can make an AI that is brilliant at Chess (Leela for instance) but it's the push to label these current models as approaching AGI which seems to be incorrect.
That doesn't mean LLM's and AI in general isn't going to be disruptive, people are going to have to make changes and there are certain roles out there where it will perform well in a limited and specific capacity probably at the expense of the existing workers.
SB
1
AI enthusiasts: What do you make of Yoshua Bengio's interview? on 10:11 - Jun 20 with 204 views
AI enthusiasts: What do you make of Yoshua Bengio's interview? on 09:55 - Jun 20 by NthQldITFC
Our language (English or whatever verbal language we use) is also very unhelpful in these discussions. We argue about 'thinking' or 'consciousness' but what precise value do those two words have for the purposes of evaluation (and what precise value do other words used any attempts to define them have, or indeed the words I have just written to try to express what I'm thinking about words).
Here's a good example of the limitations of an LLM.
It cannot do maths unless it has already been fed the answer of a specific calculation.
So if I asked one what 5 x 5 is I imagine it would be able to answer correctly 25 because in lots of places on the internet and in books this would have been written out.
But if I ask it something completely random, it just doesn't know what to do and makes it up
It is not reasoning about the question, it doesn't know it needs to multiply the two numbers together to get an answer, it doesn't even know to say that it can't do the multiplication, it just answers with the next best words which in this case are completely wrong.
All it is doing is looking up information it already has, and if it doesn't fit that, it can't do anything further.
Here's another example, asking it to group random words by how they rhyme:
AI enthusiasts: What do you make of Yoshua Bengio's interview? on 09:49 - Jun 20 by DanTheMan
I've listened to it, and it hasn't changed my view. It's hard to tell from the snippet, but to me it doesn't sound like he's talking about LLMs but about AIs in a much broader sense.
I thought our whole chat was about AI in general as it started with an initial post about AI in general. Isn't a LLM just a sub set of AI anyway focused on presenting a response to an end user?
You can define that for me as Im not sure.
Perhaps we can at least agree that it's incorrect to hold black and white views about whether AI is currently conscious and showing thinking. As GH says he's currently ambivalent about it showing consciousness but he does believe they are thinking.
AI enthusiasts: What do you make of Yoshua Bengio's interview? on 10:11 - Jun 20 by DanTheMan
Here's a good example of the limitations of an LLM.
It cannot do maths unless it has already been fed the answer of a specific calculation.
So if I asked one what 5 x 5 is I imagine it would be able to answer correctly 25 because in lots of places on the internet and in books this would have been written out.
But if I ask it something completely random, it just doesn't know what to do and makes it up
It is not reasoning about the question, it doesn't know it needs to multiply the two numbers together to get an answer, it doesn't even know to say that it can't do the multiplication, it just answers with the next best words which in this case are completely wrong.
All it is doing is looking up information it already has, and if it doesn't fit that, it can't do anything further.
Here's another example, asking it to group random words by how they rhyme:
You get the idea. It is recalling information like a search engine but it can't actually reason about things, it's not thinking.
When I was asking AI to create a Spurs fan action figure with whiskey bottle, paper bag and Kleenex it was coming up with suggestions to make it funnier. The box being in the style of an empty trophy cabinet etc. So while it may not think it does seem to be more advanced than a search engine.
AI enthusiasts: What do you make of Yoshua Bengio's interview? on 10:11 - Jun 20 by DanTheMan
Here's a good example of the limitations of an LLM.
It cannot do maths unless it has already been fed the answer of a specific calculation.
So if I asked one what 5 x 5 is I imagine it would be able to answer correctly 25 because in lots of places on the internet and in books this would have been written out.
But if I ask it something completely random, it just doesn't know what to do and makes it up
It is not reasoning about the question, it doesn't know it needs to multiply the two numbers together to get an answer, it doesn't even know to say that it can't do the multiplication, it just answers with the next best words which in this case are completely wrong.
All it is doing is looking up information it already has, and if it doesn't fit that, it can't do anything further.
Here's another example, asking it to group random words by how they rhyme:
You get the idea. It is recalling information like a search engine but it can't actually reason about things, it's not thinking.
So when GH asks AI what's the similarity between a compost heap and an atomic bomb. How did it come up with the answer if there wasn't a direct match in its database to that question? How did it come up with the correct answer without a form of thinking?
AI enthusiasts: What do you make of Yoshua Bengio's interview? on 10:30 - Jun 20 by nodge_blue
I thought our whole chat was about AI in general as it started with an initial post about AI in general. Isn't a LLM just a sub set of AI anyway focused on presenting a response to an end user?
You can define that for me as Im not sure.
Perhaps we can at least agree that it's incorrect to hold black and white views about whether AI is currently conscious and showing thinking. As GH says he's currently ambivalent about it showing consciousness but he does believe they are thinking.
Yes, I was talking about AI in general at the start of the thread and subsequently, not LLMs specifically - we've been talking at cross-purposes I think.
AI enthusiasts: What do you make of Yoshua Bengio's interview? on 10:40 - Jun 20 by nodge_blue
So when GH asks AI what's the similarity between a compost heap and an atomic bomb. How did it come up with the answer if there wasn't a direct match in its database to that question? How did it come up with the correct answer without a form of thinking?
That's a difficult question to answer, and any answer I do give will be butchering it as I'm not a very good science communicator.