Yoshua Bengio, the Canadian-French computer scientist who received the 2018 ACM A.M. Turing Award, often referred to as the "Nobel Prize of Computing", together with Geoffrey Hinton and Yann LeCun for their foundational work on deep learning (Wikipedia), gave an interview just now on the World Service.
I think we've had discussions on here in the past where people have played down the risks of AI, but this expert speaks with great nervousness about AI models which exhibit 'deception, cheating, lying, blackmailing, and trying to hack the host computer when being shut down.'
He speaks of AI as becoming a 'competitor to humanity' with 'catastrophic risks' and 'threats to democracy'. He says governments aren't taking the real risks seriously, and that experimental systems have been seen to develop their own goals to harm humans and escape from their constraints.
He references 'the end of humanity' as a possibility several times, and speaks of the phenomenal speed of change and the 'tendency in the last few months indicating that AIs want to break out and get rid of us'.
He says we have a 'window in which we could make right decisions' and that the 'public needs a voice' and needs to educate themselves. He says AI has its own intentions and that this is not a sci-fi movie but experiments going on in labs all over the world today.
---
I recognise that the above is all quite vague in a sense, but then I think that's the implication: our knowledge and control of the future of AI in our/its world is pretty vague and tenuous too, perhaps not really in our hands as much as we might wish to think, and the timeline seems very compressed.
Yoshua Bengio seems to me to speak with great authority and seems very worried.
Just wondering what people's thoughts are in June 2025 (and how they may have changed since say June 2024).
Do we just carry on and accentuate the positives, or do we change our world view a bit and step back on this issue whilst we have a chance?
'An area we call black box, where we don't fully understand it' (Google CEO). So I don't think it's correct to talk about our understanding of where it's at as being simply 'wrong' or 'right'.
Also have a look from minute 56 of the video I posted. He's talking about AI learning analogies, and how it's going to be far more powerful than humans because it's going to see analogies that we have never considered. So we are talking about going far beyond data models supplied by humans, which are constrained by human understanding. What is that, if not learning?
I've read Mustafa Suleyman's The Coming Wave. It's hard, if not impossible, to read that and not have major concerns. Have a read of it if you're interested in the subject.
That video link doesn't work for me.
I understand why these things are considered a black box: we ask it things and it answers, but there is no debuggable way of explaining why it answered the way it did.
Theoretically, you could have these things learn on the fly, but there's a very good reason you don't: bad data. An AI has no idea how to sort good data from bad, so if you let it run wild it would just end up a complete mess.
So right now there has to be a human being behind it determining what is and isn't bad data, which is why they release new models as they tweak things and add more high-quality data.
But even that is not truly learning, it's not having thoughts or consciousness, it's just adding more data.
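To make that curation point concrete, here's a toy sketch (purely illustrative, not any lab's real pipeline) of the kind of heuristic quality filters a human might encode before data goes anywhere near training:

```python
# Toy illustration of heuristic data filtering, not any real lab's pipeline.
# Real curation combines trained classifiers, deduplication and human review.

def looks_like_good_data(text: str, blocklist: set[str]) -> bool:
    """Crude quality heuristics a human curator might encode."""
    words = text.split()
    if len(words) < 5:                      # too short to be informative
        return False
    if len(set(words)) / len(words) < 0.3:  # highly repetitive spam
        return False
    if any(term in text.lower() for term in blocklist):
        return False
    return True

raw_corpus = [
    "buy pills now now now now now",
    "The mitochondria is the powerhouse of the cell.",
]
cleaned = [t for t in raw_corpus if looks_like_good_data(t, {"buy pills"})]
print(cleaned)  # only the second sentence survives
```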
AI enthusiasts: What do you make of Yoshua Bengio's interview? on 09:25 - Jun 19 by DanTheMan
If you train a model on "A is B" it will not understand that "B is A". As humans who can reason, we can do this.
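For anyone who wants to poke at that "reversal curse" themselves, a minimal sketch, assuming some chat-style API (the endpoint, payload shape and model name below are placeholders, not a real service):

```python
# Hypothetical probe of the "reversal curse". The endpoint, payload shape
# and model name are placeholders, not a real service or SDK.
import requests

API_URL = "https://api.example.com/v1/chat"  # placeholder endpoint

def ask(question: str) -> str:
    resp = requests.post(API_URL, json={"model": "some-llm", "prompt": question})
    return resp.json()["answer"]  # assumed response shape

# Suppose the training data contains "A is B" statements such as
# "Tom Cruise's mother is Mary Lee Pfeiffer."
print(ask("Who is Tom Cruise's mother?"))      # forward direction: usually fine
print(ask("Who is Mary Lee Pfeiffer's son?"))  # reversed: often fails
```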
Dan - watch the YouTube video, read the book.
You have a very coder's view of this at the moment, but I'm referring to the people architecting it. And even they don't really understand where we are or what's coming next.
You frame data models as if they constitute the boundaries of what AI can think about or conceive, but as Geoffrey Hinton says in that section from 56 minutes, that's just not the case.
It's like giving a talented teenager access to the British Library and believing they can't develop ideas beyond the books they read.
You are missing the whole point of AI: using the data as a model but then effectively building analogies, thinking beyond the data and developing solutions not considered by humans. That's the very essence of what AI is about and is very quickly becoming. And some of those solutions may not actually be in humans' interest, more in the AI's interest.
AI enthusiasts: What do you make of Yoshua Bengio's interview? on 08:11 - Jun 19 by DanTheMan
"Anyone can develop an AI model with a good enough compoota."
Sure, if you've got hundreds of thousands to millions of pounds to train it properly, the expertise to do it well, and you're not just piggybacking off someone else's model.
And that's without getting into whatever you're training it on.
I'm going to do the counterpoint here as someone who has some experience with computer science. The biggest changes to our lives from this, I predict, will be:
1. People are going to start getting easily fooled by AI images and videos, which is going to make disinformation an increasingly difficult thing for the average person to deal with.
2. Lots of companies are going to start advertising that they do AI, by which they mean they've integrated with one of the existing LLMs and made a crappy chatbot that is mostly useless (see the sketch below).
3. Some companies will try to use AI as a human replacement, which might work for a bit until it starts to hallucinate, as it likes to do, and it causes issues.
This is in the short term at least.
I can't see any evidence of breakthroughs that mean that in 5 years "everything will change". Right now, AI (and mostly LLMs are what we are talking about here) is a tool that needs to be wielded by someone who knows what they're doing, not a human replacement.
That might mean people are more productive, and that in turn causes job losses.
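On point 2 above, the "integration" behind many of those chatbots is often little more than this (a sketch against a placeholder API, not any vendor's real interface):

```python
# What "we do AI now" often amounts to: a thin wrapper around someone else's
# hosted model. Endpoint, payload shape and model name are placeholders.
import requests

LLM_API = "https://api.example.com/v1/complete"  # placeholder

def chatbot_reply(user_message: str) -> str:
    """Forward the user's text to a hosted LLM and return whatever comes back."""
    payload = {
        "model": "some-hosted-llm",
        "prompt": "You are OurCompanyBot. Answer helpfully.\nUser: " + user_message,
    }
    resp = requests.post(LLM_API, json=payload, timeout=30)
    return resp.json().get("text", "Sorry, something went wrong.")

print(chatbot_reply("Where is my order?"))
```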
Yes, I agree with all this. It’s the latest hype cycle — the great AGI breakthrough is always around the corner.
The main danger comes from governments using it to make decisions over humans due to its supposed “intelligence”, or companies using it for, say, production code.
AI enthusiasts: What do you make of Yoshua Bengio's interview? on 10:08 - Jun 19 with 268 views
AI enthusiasts: What do you make of Yoshua Bengio's interview? on 09:03 - Jun 19 by thebooks
“These models are based on architecting neural networks based on the human brain” — you can’t just add “based on the human brain” to imply they’re intelligent. They’re not.
For example, an LLM looks at the statistical probability of a token (a word, stem or bit of punctuation) following another within its huge datasets. The neural network provides a means to consider the tokens it’s already output when it calculates what the next one should be — you can’t do this using other methods such as tables as the amounts of data to cross reference are mind-bogglingly vast.
It is still just outputting tokens based on where they appear in existing texts. It has no idea whether what it's conveying is true, what it means or anything else.
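A stripped-down sketch of that next-token mechanism (toy numbers, conditioning only on the previous token, whereas a real LLM's network conditions on the whole context):

```python
# Toy next-token sampler. A real LLM computes these probabilities with a
# neural network over the whole context; here they're hard-coded bigram
# probabilities, purely to show the sampling mechanism.
import random

# P(next token | previous token), invented numbers
probs = {
    "the": {"cat": 0.5, "dog": 0.3, "<end>": 0.2},
    "cat": {"sat": 0.7, "ran": 0.2, "<end>": 0.1},
    "dog": {"ran": 0.6, "sat": 0.3, "<end>": 0.1},
    "sat": {"<end>": 1.0},
    "ran": {"<end>": 1.0},
}

def generate(start: str = "the") -> str:
    tokens = [start]
    while tokens[-1] != "<end>":
        dist = probs[tokens[-1]]
        nxt = random.choices(list(dist), weights=list(dist.values()))[0]
        tokens.append(nxt)
    return " ".join(tokens[:-1])  # drop the end marker

print(generate())  # e.g. "the cat sat", with no notion of truth or meaning
```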
I can add "based on the human brain" because that's exactly what Geoffrey Hinton, the guy who architected all this, has said himself.
It's called a neural network for a reason.
And they are going to become far more intelligent than any human on the planet.
AI enthusiasts: What do you make of Yoshua Bengio's interview? on 10:08 - Jun 19 by nodge_blue
Yes, I know why they're called that. The point is they're used in training LLMs because they provide a method to suggest what token should be output next based on what's been output so far and what exists in the training data. That's still not intelligence, it's just generating tokens probabilistically. It still can't know whether the output is true, meaningful, ethical or whatever.
“And they are going to become far more intelligent than any human on the planet.” — Always in the future…
AI enthusiasts: What do you make of Yoshua Bengio's interview? on 10:23 - Jun 19 with 244 views
AI enthusiasts: What do you make of Yoshua Bengio's interview? on 10:08 - Jun 19 by nodge_blue
We don't even fully understand how the human brain works - maybe we understand 20-30% of how the brain works, and that's being generous. So how are we going to make machines that are superior to something we really don't understand?
For example, if you didn't understand how a rocket works, how could you improve on its design?
AI enthusiasts: What do you make of Yoshua Bengio's interview? on 13:34 - Jun 19 by blueasfook
I think he is being somewhat sensationalist. I think climate change currently poses a far bigger risk to humanity than AI. Often they use vacuous statements about hacking critical systems, using the internet for its own ends or threatening democracy.
I think it is more likely humans use it for bad ends than AI.
One argument I've seen put forward is that companies and individuals within the AI field talk up the risk in order to generate income to mitigate against it, when the risk isn't substantial compared to other risks.
I also think some of these threats are already critical through other means, yet we are unwilling to deal with them collectively. Take the threat to democracy - Elon Musk is a biased actor who bought US votes through lotteries and controls access to what information people see. Disinformation is rampant. Companies pay governments money to take actions in their interest that are not in the interest of the electorate.
As for my day-to-day life: I use AI to assist me in basic programming tasks. It has got better at this, but the speed of improvement is not as quick as those warning of doom and gloom seem to suggest.
My company also uses AI for other things, like voiceovers (TTS) in marketing and Gemini/ChatGPT to refine written work. This makes us more productive as individuals and as a company. Another way to put this is that we are less reliant on other companies. Now, we've always had a do-it-yourself culture, so other service providers aren't missing out on business from us. But I can imagine companies that interact with more service providers scaling that back, causing a rise in unemployment in certain service sectors.