| Anthropic 08:13 - Feb 28 with 554 views | StokieBlue | Good stuff from Anthropic: they have refused to remove or relax the safeguards built into Claude for the US government and military. In response, they have had all their contracts cancelled and the government has told all its contractors they can't use Claude either. OpenAI have stepped into the gap claiming they have the same safeguards, but this seems unlikely; otherwise why would the government and military want them? They are desperate for cash though, given the extreme levels of their investment to try and kick-start AI. As a general point, with regards to coding, I was sceptical of these agents but I have actually been very impressed with Claude after using it this week. If you have experience and already know what you want to do, and pretty much how you want to do it, then it can save a lot of time. SB |  |
| Anthropic on 08:17 - Feb 28 with 512 views | noggin | You might as well have written that in Japanese SB. I feel old. |  |
|  |
| Anthropic on 08:18 - Feb 28 with 509 views | StokieBlue |
| Anthropic on 08:17 - Feb 28 by noggin | You might as well have written that in Japanese SB. I feel old. |
Pretty sure most of us feel old nowadays. The world moves at a relentless pace. SB |  | |  |
| Anthropic on 08:23 - Feb 28 with 470 views | noggin |
| Anthropic on 08:18 - Feb 28 by StokieBlue | Pretty sure most of us feel old nowadays. The world moves at a relentless pace. SB |
It overwhelms me sometimes. I wonder what the world will look like in 10 and 20 years' time. |  |
|  |
| Anthropic on 08:31 - Feb 28 with 423 views | BanksterDebtSlave |
| Anthropic on 08:17 - Feb 28 by noggin | You might as well have written that in Japanese SB. I feel old. |
It may be something to do with tactical nuclear weapons, but I'm sure it's fine! https://news.sky.com/story/ai- 'Anthropic, which has said it has no problem in principle with allowing the US military access to its models, is resisting unless Mr Hegseth agrees to their red lines: That their AI isn't used for mass surveillance of US civilians nor for lethal attacks without human oversight.' [Post edited 28 Feb 8:32]
|  |
|  |
| Anthropic on 08:32 - Feb 28 with 426 views | StokieBlue |
| Anthropic on 08:23 - Feb 28 by noggin | It overwhelms me sometimes. I wonder what the world will look like in 10 and 20 years' time. |
That will depend very much on who is allowed to shape it. In relation to the original post and AI within the military, I saw an article this week which highlights why it's a bad idea at the moment: https://www.newscientist.com/a The main point from the article (you can't read it all unless you have a subscription) is: "Leading AIs from OpenAI, Anthropic and Google opted to use nuclear weapons in simulated war games in 95 per cent of cases" That's not ideal... SB |  |
| Anthropic on 08:39 - Feb 28 with 393 views | Dubtractor |
| Anthropic on 08:31 - Feb 28 by BanksterDebtSlave | It may be something to do with tactical nuclear weapons, but I'm sure it's fine! https://news.sky.com/story/ai- 'Anthropic, which has said it has no problem in principle with allowing the US military access to its models, is resisting unless Mr Hegseth agrees to their red lines: That their AI isn't used for mass surveillance of US civilians nor for lethal attacks without human oversight.' [Post edited 28 Feb 8:32]
|
With each passing day it increasingly feels like we're living in the timeline where the Terminator films are a documentary. |  |
|  |
| Anthropic on 08:47 - Feb 28 with 353 views | BanksterDebtSlave |
| Anthropic on 08:39 - Feb 28 by Dubtractor | With each passing day it increasingly feels like we're living in the timeline where the Terminator films are a documentary. |
It must just be what we the consumers demand! |  |
|  |
| Anthropic on 10:17 - Feb 28 with 179 views | nrb1985 | Think your last point is increasingly the direction of travel. It's going to be a tool for tremendous productivity, in my opinion, but I don't think knowledge workers are going to be automated away to the extent some recent papers suggest. The fact that so many billions are being spent on CAPEX without much of an improvement in things like hallucination rates makes me think it will also be a more gradual thing rather than a big bang, albeit you can already see signs it's coming in certain sectors. |  |
| Anthropic on 10:30 - Feb 28 with 150 views | Nthsuffolkblue |
| Anthropic on 08:32 - Feb 28 by StokieBlue | That will depend very much on who is allowed to shape it. In relation to the original post and AI within the military, I saw an article this week which highlights why it's a bad idea at the moment: https://www.newscientist.com/a The main point from the article (you can't read it all unless you have a subscription) is: "Leading AIs from OpenAI, Anthropic and Google opted to use nuclear weapons in simulated war games in 95 per cent of cases" That's not ideal... SB |
So, if I am reading this right, the company referred to in the OP is run by people who care more about still being alive than about money ... But someone else has stepped into the gap they vacated! |  |
|  |
| Anthropic on 10:39 - Feb 28 with 125 views | DanTheMan |
| Anthropic on 10:17 - Feb 28 by nrb1985 | Think your last point is increasingly the direction of travel. It's going to be a tool for tremendous productivity, in my opinion, but I don't think knowledge workers are going to be automated away to the extent some recent papers suggest. The fact that so many billions are being spent on CAPEX without much of an improvement in things like hallucination rates makes me think it will also be a more gradual thing rather than a big bang, albeit you can already see signs it's coming in certain sectors. |
Hallucination isn't something you can really get rid of - it's inherent to how LLMs work. |  |
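For anyone curious why that is, here's a minimal sketch of the idea DanTheMan is describing. An LLM doesn't look up facts; it turns scores over possible next tokens into a probability distribution and samples from it, so a plausible-but-wrong continuation gets picked some fraction of the time. The vocabulary and scores below are entirely made up for illustration, not taken from any real model.

```python
import math
import random

def softmax(logits):
    # Turn raw scores into a probability distribution over tokens
    m = max(logits.values())
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Hypothetical next-token scores after "The capital of Australia is"
logits = {"Canberra": 2.0, "Sydney": 1.4, "Melbourne": 0.5}
probs = softmax(logits)

# The model samples from the distribution rather than "knowing" the answer,
# so the plausible-but-wrong "Sydney" comes up a meaningful share of the time
random.seed(0)
samples = random.choices(list(probs), weights=list(probs.values()), k=1000)
print(samples.count("Sydney") / 1000)
```

Greedy decoding (always picking the top token) reduces this but doesn't eliminate it, since the top-scoring token can itself be wrong. That's the sense in which hallucination is baked into the mechanism rather than being a bug you can patch out.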
|  |
| Anthropic on 11:06 - Feb 28 with 83 views | nrb1985 |
| Anthropic on 10:39 - Feb 28 by DanTheMan | Hallucination isn't something you can really get rid of - it's inherent to how LLMs work. |
Did you see the Citrini note earlier this week out of interest? Caused a bit of a stir but I think it’s meant just as a thought piece rather than their base case. https://www.citriniresearch.co |  | |  |