I haven't seen it, but I see it's being reported as "death to the IDF". I think that's where it crosses a line. Supporting Palestine is understandable, but chanting for the death of people is not.
I have to say, after seeing the Louis Theroux documentary, that Israeli settlers are absolutely provoking the Palestinians in conjunction with the army. They are 100 percent keeping them in a subjugated state.
I don't think all Israelis are like that, but these settlers seem to have a cowboy mindset of simply going after land they believe is theirs.
It's such a complicated f up that it's hard to see it ever resolving.
I would probably keep Broadhead if he wants to stay but I suspect he may want a move.
Also Muric. Pretty sure he was in the Championship team of the year before. Are we really saying he is totally written off? Isn't it worth another look if he's had a chance for a physical and mental reset?
I've been a bit absorbed in the other aspect of this thread, but can I say that this is a truly heartbreaking thing about AI: the idea that human creativity can be simply and cheaply replicated, displacing real human thinking on subjects and also in the actual creative arts.
I suspect enough humans will want that still. I will never read a book written by AI. Or listen to AI music. Not knowingly anyway.
I think you are probably someone who will never accept AI consciousness no matter how close it gets to humans, simply because it's not organic and ultimately it's all ones and zeros.
Whereas we have electrical pulses going through an organic brain. Not that I fully understand how the brain works.
But the counter-argument is that both will come to be seen as valid forms of consciousness (whatever that means) and produce the same end results. GH says he thinks our very definition of consciousness may have to change.
Anyway look - let's leave it there. My main concern is the impact of AI rather than its state of consciousness, which is perhaps the opposite of your focus.
I did post earlier that he was ambivalent (and I've been pretty consistent in that, looking back), but we are splitting hairs. Even to be ambivalent right now, in this childhood state of AI with the teenager and adult models coming quickly, is quite significant. He does say they are thinking, and he does believe they have emotions.
So the robot hasn't simply been programmed to run away, as in "if I see a large robot, I run". It's been programmed in a way that lets it interpret danger and use a subjective experience to realise it's in danger, which GH is saying is just as valid an emotion even without the physiological response. You are quite entitled to hold a different opinion. But if the result is the same, does it matter?
Isn't his point about looking at the object through a prism a subjective experience? It understands that the prism has distorted the apparent position of the object and understands where the object really is. The chatbot understood this and described itself as having a subjective experience.
Possibly. It's probably better to step back and ask whether AIs are displaying independent thoughts, which could mean they do things autonomously that aren't in the best interests of humans.
As I've said, no one's a winner if we get wiped out by a non-conscious, non-emotive AI. It's all a matter of opinion, as we go along this journey, as to where they are with all that.
"people say machines cant have feelings. And people are curiously confident about that. I have no idea why."
Look, I'm trying to stay polite with you. Rather than you telling me to go away and read stuff, I'm asking you to expand your argument as to why GH is wrong: say why AI isn't currently conscious/sentient/displaying emotions, and why it won't be for a long time.
I haven't asserted they are conscious. I have said that there are serious Nobel Prize winners who would argue they are now, that almost the whole AI industry says we are moving towards a possible tipping point, and that the majority of people feel they are displaying emotions and consciousness.
All you do is dismiss it all, say it's for the future, and then not back that up. Give me a few paragraphs on it.
So you may not have read all of this thread, but Elon Musk and GH put a rough 10 to 20 percent risk on AI killing us in the next 20 years. As we have also said, you can say that would be a non-thinking, non-conscious AI doing that if you wish.