It’s about the imminent AI technology wave, whose sheer scale, power and level of democratised access are going to make the internet wave look like peanuts. It may sound like hyperbole, but the next 30 years could see the biggest change in history.
I’m about halfway through it now. Whilst the book emphasises these models’ ability to essentially self-learn and work through so many complex permutations of any given problem, it more than highlights the problems coming with it.
A technology that is going to be available to anyone, for malicious as well as good purposes. A technology that governments don’t really understand and can’t effectively restrict. No wonder even Musk was suggesting a pause in its development.
Self-learning malware that keeps evolving and exploiting every weakness. Robots the size of bees capable of assassinations. Deep fakes. Everything available to anyone, because of the way technology becomes cheaper and mass-available.
I thought twice about posting this, as it sounds like a negative post about just one more world problem. But I think it is one, and I’m finding the book harder to read as I go. There are of course good things: maybe we need AI to assess all the potential solutions to global warming, and to look at patient scans and assess cancers quicker than any doctor can.
I think he’s about to suggest some measures for containment but I’m not optimistic.
I’d be interested in anyone else’s take on it if they’ve read it.
Humanity is not known for its caution and self-restraint in the face of shiny new toys. The fact that governments and individuals waffle on about the ridiculous concept of 'Regulation' in a freely connected, open source, effectively zero cost to modify/replicate arena is about as idiotic as it gets. The point you make that the technology is available to everybody is critical, as is its rapid evolutionary potential.
Pretty terrifying, and as far as I can tell entirely plausible.
There seems no chance of government regulating AI development in any effective way, nor of the tech sector doing it themselves.
There is a pretty good chance we are either literally doomed (the man-in-a-garage inventing viruses scenario) or are all going to be unemployed in a few years leading to economic Armageddon.
Happy days.
We are similarly clearly incapable of moderating climate change.
Has anyone read the coming wave by Mustafa Suleyman? on 10:00 - Dec 7 with 3939 views
Has anyone read the coming wave by Mustafa Suleyman? on 09:43 - Dec 7 by NthQldITFC
He dedicates early chapters to previous technology waves and basically says you can't stop them.
The trouble with this one is that, for the first time, the wave goes far beyond human capabilities and we totally lose any semblance of control over it, while some "actors" will actively promote it to become a self-learning destructive force.
Everyone getting access to the best legal and medical advice at the end of a device is the more positive spin. As is AI being used to work through every possible permutation to find the best candidates for new antibiotics etc.
But paragraphs like this are unsettling...
"A paradox of the coming wave is that technologies are largely beyond our abilities to comprehend at a granular level and yet still within our ability to create and use. In AI, the neural networks moving toward autonomy are, at present, not explainable"
Has anyone read the coming wave by Mustafa Suleyman? on 10:01 - Dec 7 by geg1992
I personally think AI is the biggest threat to our existence. I find it terrifying.
It can do so many great things, but in the wrong hands, who knows what misery it could cause. Sadly, there are a lot of evil people in the world.
I'm starting to think this is up there with global warming. But at least with that we can evaluate it and wrap it into a problem that's in theory within our capabilities to solve.
This is something that we don't even understand its current state or its capabilities, with no way of knowing how to limit those capabilities. No wonder Musk was dismissive at the conference to discuss this with world leaders.
In short, AI has the potential to adapt, learn and change far quicker than humans do. The single biggest difference as I see it is that humans choose not to learn and adapt - we ignore evidence. AI won't.
Has anyone read the coming wave by Mustafa Suleyman? on 10:01 - Dec 7 by Ewan_Oozami
I can't remember who said it but the phrase goes something like, "A fully AI-optimised world has no room for human beings"
Except for the fact it would have been humans doing the optimising and designing the system for their benefit. Much like we've been doing since the beginning of civilisation - working to shape the environment* rather than being slaves to it.
A lot of the fear is based on a sci-fi fantasy of computers with human levels of self-consciousness, responding to situations as a human would do, just with less empathy and utter ruthlessness**. An AI running your central heating system is not going to suddenly kill you just to save a bit of gas (as a human might), it's simply going to get on with its job until told to stop. Why would it do anything otherwise?
The danger is not AI, but how nasty people may use it to enhance their activities. Which is no different to the car, the phone, TV and computers. Good people will also use it to enhance their activities.
People are the problem, not AI.
* In the widest sense, not purely ecology.
** This sounds more like a politician than an AI.
Humanity is becoming too smart for its own good. We seem to be hell-bent on developing technologies to destroy ourselves. Nuclear weapons, deadly viruses, artificial intelligence, robots. I'd agree that AI has the potential to become our number one existential threat. I don't know if we will ever get to some kind of SkyNet scenario where the AI becomes self-aware and decides to wipe out humanity. My worry is more it will lead to mass poverty and a pretty miserable existence for humans as we will have no work to do and little opportunity to make money. Therefore no income, and we will be reliant on some kind of basic support.
Has anyone read the coming wave by Mustafa Suleyman? on 10:42 - Dec 7 by Guthrum
But people without AI are not such a dangerous problem.
I really don't like the sort of 'AI running your central heating system is not going to suddenly kill you' quotes. They strike me as being complacent and taking a standpoint on the issue at too low and irrelevant a level, and consequently dangerous. AI is a tool, sure, but it is a tool potentially capable of (at the very least) empowering many more 'bad' humans to do much more 'bad' things than just a few have been capable of before. It's a great big step change in risk level and should not be seen as just another tool.
As a tool it can be brilliantly good or hideously bad. You can give a kid a benign set square to play with and be pretty sure that neither the kid nor the set square are going to do too much damage to themselves or anyone else, but give a million kids a million benign chainsaws plus the potential to mount them on hackable or buggy self-driving cars, and you might have a problem. That requires, at least, great caution or concern about the tool itself, because on the human side there will always be, for want of a better word, evil.
Has anyone read the coming wave by Mustafa Suleyman? on 11:30 - Dec 7 by NthQldITFC
Which is why chainsaws are not freely available for children to buy, and there are rules on their usage.
Rather flippant, but my point is that highly destructive tools end up being very tightly controlled and regulated. Take nuclear weapons, for example. There are extremely strong international systems in place to limit their spread and detect any usage. Even testing was, until recently, banned by treaty and still is by convention. Even dodgy national leaders don't want to wipe out themselves and their entire populations.
AI tends to be for specific and discrete purposes. One controlling a country's power generation is similar to one controlling your central heating, just scaled up. But that's all it does, control the power generation. It has no reason to start doing anything else. It has no way of "ganging up" with other systems to wipe out humanity.
The biggest threat from AI is that many basic analysis, logistic and organisational tasks can be done quicker and cheaper by computers than humans. They will take our jobs. That's a process which has already been going on for decades*.
* About three centuries, if you include labour-saving mechanisation in the Industrial Revolution.
Has anyone read the coming wave by Mustafa Suleyman? on 11:49 - Dec 7 by Guthrum
The book covers most of these arguments.
The reason why nuclear proliferation hasn’t really got out of hand is that it is both hugely difficult and hugely expensive to develop weapons. Sure, international treaties help, but the analogy doesn’t really work.
I’m much less convinced by the “it’ll develop sentience and destroy us” argument.
I’m pretty terrified by how easy it will be (or already is) for some nutter with a credit card and a PC to engineer a virus that could kill billions.
Has anyone read the coming wave by Mustafa Suleyman? on 12:15 - Dec 7 with 3583 views
Has anyone read the coming wave by Mustafa Suleyman? on 11:49 - Dec 7 by Guthrum
"Rather flippant, but my point is that highly destructive tools end up being very tightly controlled and regulated."
This isn't easy to do when it comes to regulating AI.
The only way you can feasibly do this is by restricting the physical components needed to build them as the rest is just maths already out there. It can't be unmade.
It's the same argument for banning encryption, the cat is out of the bag, it can't be done.
General AI is where things start to get a little scary, and then we have to start considering, in all seriousness, whether they are more than mere machines.
Has anyone read the coming wave by Mustafa Suleyman? on 11:49 - Dec 7 by Guthrum
Nuclear weapons and AI are the very antitheses of each other in terms of potential for control and regulation, though. AI software can be replicated at effectively zero cost, transported and stored in the ether, everywhere and nowhere, and can easily be modified to remove any software safeguards if open source, or practically reverse-engineered on existing frameworks if not.
I realise, of course, that there has to be a (probably) deliberate human action to initiate a bad outcome, but AI as a general concept applicable everywhere has step-change potential to enable those bad outcomes to happen. Focussing on what it is right now in discrete applications, in my opinion, takes your eye off the fundamental issue - which admittedly is realistically unavoidable, so maybe we should just shut our eyes and let it happen, à la AGW.
Has anyone read the coming wave by Mustafa Suleyman? on 10:42 - Dec 7 by Guthrum
I think to say people are the problem and not AI isn't quite right. The AI goes beyond human control: we don't understand how it's evolving, and therefore can't assume it will naturally do the best things as we would see them.
Has anyone read the coming wave by Mustafa Suleyman? on 12:21 - Dec 7 by DanTheMan
What physical components though? If we're talking about specialist applications running on specialist hardware, maybe it can be restricted up to a point for a while (until an evil genius decides to build one in his secret lab!), but not really.
In terms of a general AI framework with access to the Internet and potentially private human research databases, effective linguistic and mathematical interpreters, powerful learning algorithms, sufficient processing power, buffers and storage, and feedback capabilities to the Internet/wider world - as you say, it's unstoppable.
Has anyone read the coming wave by Mustafa Suleyman? on 12:52 - Dec 7 by NthQldITFC
The most expensive part is what are essentially specialised graphics cards; for the models that do the really funky things you need hundreds of thousands of pounds' worth, and only a couple of companies make them, the biggest one being nVidia. At the moment, just getting your hands on enough of them is one of the things slowing down the growth of competitors to the likes of OpenAI.
Control of these things is where the geopolitics starts coming in, which you can see with the U.S. banning the sale of some AI chips to China which massively hampered nVidia.
Has anyone read the coming wave by Mustafa Suleyman? on 13:23 - Dec 7 by DanTheMan
Presumably though, theoretically at least, the same learning could occur on a more CPU-focussed setup with older GPUs, and distributed over a wider network, given enough time?
For cut-throat commercial competition (at the high altar of the great religion!) I can see why control of cutting-edge hardware could hold one entity back relative to another and neutralise its position. But hardware two or three generations old gets flogged off to anyone who wants it, so long as that doesn't interfere with top-end commercial competition, and 18 months later you're right where everyone assumed you couldn't be.
Has anyone read the coming wave by Mustafa Suleyman? on 13:52 - Dec 7 by NthQldITFC
CPUs are practically useless for machine-learning purposes. It's technically possible, but for real-world applications it would never, ever work.
Potentially older GPUs could do it - I've built some models at home - but the models that do anything complicated require tens of thousands of high-spec, specialised GPUs.
Now, it's also worth noting that usually you wouldn't bother to buy thousands: you only need to train the model once. Chances are a malicious actor would funnel the money somewhere, rent the GPUs to train the model, and then only need a few GPUs to run the thing. You only need thousands if you're someone like OpenAI, letting other people access your models.
Here's an interesting blog on it, but this also goes to another point - you still need to get the data.
Now, for someone like China, not an issue - they collect this stuff and can do whatever they want with it. But if you're a smaller malicious actor, these sorts of data sets are very difficult to find. The training, and having the skills to know how to train on those data sets, are key; otherwise you end up with a mess of an AI.
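To put rough numbers on the "train once, then run it cheaply" point above, here's a back-of-envelope sketch. All the concrete figures (model size, token count, GPU speed) are illustrative assumptions, and the 6·N·D training / 2·N-per-token inference FLOP counts are common rules of thumb, not exact:

```python
# Back-of-envelope: why training needs a rented GPU farm, but running
# the finished model doesn't. Rule-of-thumb estimates: training costs
# ~6*N*D FLOPs for N parameters on D tokens; inference costs ~2*N
# FLOPs per generated token. All concrete numbers are assumptions.

params = 7e9    # a 7-billion-parameter model (assumed)
tokens = 1e12   # 1 trillion training tokens (assumed)

train_flops = 6 * params * tokens    # one-off training cost
infer_flops_per_token = 2 * params   # recurring cost per token

# Assume one high-end accelerator sustains ~1e15 FLOP/s:
gpu_flops_per_s = 1e15
train_gpu_days = train_flops / gpu_flops_per_s / 86400

print(f"training:  {train_flops:.1e} FLOPs total "
      f"(~{train_gpu_days:,.0f} GPU-days on a single card)")
print(f"inference: {infer_flops_per_token:.1e} FLOPs per token")
```

On these assumed numbers, training is roughly a year and a half of work for one card - hence renting hundreds of GPUs for a few days - while generating a token is about a trillion times cheaper, so a handful of cards suffice to run the result.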