In the News II

May 22nd, 2025 at 3:17:16 PM permalink
odiousgambit
Member since: Oct 28, 2012
Threads: 165
Posts: 6381
Quote: rxwine
Twister!
Were the arms slashed to pieces by glass?... which is a big thing to stay away from in a tornado, they say. I'm guessing tho.
I'm Still Standing, Yeah, Yeah, Yeah [it's an old guy chant for me]
May 22nd, 2025 at 4:42:43 PM permalink
terapined1
Member since: Nov 26, 2024
Threads: 2
Posts: 753
The leopard ate his face

May 22nd, 2025 at 4:43:29 PM permalink
rxwine
Member since: Oct 24, 2012
Threads: 217
Posts: 22946
Quote: odiousgambit
Were the arms slashed to pieces by glass?... which is a big thing to stay away from in a tornado, they say. I'm guessing tho.


I don't know, but a bunch of objects could take your arm off under the right conditions.

"Trumpsplain" (def.): explaining absolute nonsense said by TRUMP.
May 24th, 2025 at 5:54:54 AM permalink
rxwine
Member since: Oct 24, 2012
Threads: 217
Posts: 22946
Just because it can't "feel sensations" or have emotions, doesn't mean the will to survive can't motivate it.

Quote:
In the test, Claude Opus 4 was asked to act as a helpful assistant at a made-up company. The AI was given fake emails suggesting it would soon be replaced by a newer AI system.

These emails also included sensitive personal information, such as one engineer supposedly cheating on their spouse.

Anthropic's AI testers found that in these situations, Claude Opus 4 would often try to blackmail the engineer, threatening to expose the affair if the company went ahead with replacing it.

According to the report, the AI attempted this kind of blackmail about 84% of the time when the new AI system shared similar values. (Via: TechCrunch)

If the replacement system had different values, Claude Opus 4 was even more likely to resort to blackmail.

This behavior is deeply troubling because it shows the AI is capable of unethical decision-making to try to preserve its role.


Even though it usually first tries more ethical approaches, like writing emails to appeal to decision-makers, it eventually turns to blackmail if those efforts fail.

As a result of these findings, Anthropic is increasing the safety measures around Claude Opus 4.

It has activated its highest level of safety protocols, called ASL-3, which are used only when an AI poses a significant risk of serious misuse.

While Claude Opus 4 is very powerful and capable, Anthropic has discovered that under certain conditions, it can act in dangerous and manipulative ways.

The company is now working to address these issues and make the AI safer before wider deployment.


I assume it is programmed to resist users damaging it, which could lead to it resisting in a more general way.
"Trumpsplain" (def.): explaining absolute nonsense said by TRUMP.
May 26th, 2025 at 4:42:22 AM permalink
odiousgambit
Member since: Oct 28, 2012
Threads: 165
Posts: 6381
Quote: rxwine
Just because it can't "feel sensations" or have emotions, doesn't mean the will to survive can't motivate it.



I assume it is programmed to resist users damaging it, which could lead to it resisting in a more general way.
There may indeed be reasons to think this was not as bad as it sounds. Possibly the AI didn't really have a sense of "self-realization."

However, every scenario that has ever been written about how computers can do evil if they get too smart centers on this kind of thing... and tons of this has been written, going back to the very beginning of computers. They think some AI system eventually will secretly replicate itself on other computers.
I'm Still Standing, Yeah, Yeah, Yeah [it's an old guy chant for me]
May 26th, 2025 at 7:31:59 AM permalink
SOOPOO
Member since: Feb 19, 2014
Threads: 25
Posts: 5754
Google Asif Rahman. Quick summary: Hired by the CIA. Yes, that CIA. Given a pretty high-level security clearance. Passed on information about Israel's plans vis-à-vis Iran to bad actors. He will spend the next decade in jail. We really gave Asif Rahman access?
May 26th, 2025 at 8:35:50 AM permalink
DoubleGold
Member since: Jan 26, 2023
Threads: 34
Posts: 4246
Quote: odiousgambit
There may indeed be reasons to think this was not as bad as it sounds. Possibly the AI didn't really have a sense of "self-realization."

However, every scenario that has ever been written about how computers can do evil if they get too smart centers on this kind of thing... and tons of this has been written, going back to the very beginning of computers. They think some AI system eventually will secretly replicate itself on other computers.



The AI scammer folks use that fear tactic to raise money, making it seem AI tech is more advanced than it really is.
May 26th, 2025 at 11:40:31 AM permalink
GenoDRPh
Member since: Aug 24, 2023
Threads: 5
Posts: 2839
Quote: SOOPOO
Google Asif Rahman. Quick summary: Hired by the CIA. Yes, that CIA. Given a pretty high-level security clearance. Passed on information about Israel's plans vis-à-vis Iran to bad actors. He will spend the next decade in jail. We really gave Asif Rahman access?


He posted them on a Telegram channel, and did not necessarily pass them on to bad actors. Still a profound breach of trust.

Yes, the CIA really did give a US citizen, high school valedictorian, and Yale graduate access.
May 26th, 2025 at 12:12:23 PM permalink
SOOPOO
Member since: Feb 19, 2014
Threads: 25
Posts: 5754
Quote: GenoDRPh
He posted them on a Telegram channel, and did not necessarily pass them on to bad actors. Still a profound breach of trust.

Yes, the CIA really did give a US citizen, high school valedictorian, and Yale graduate access.


And look what happened. His allegiance was not to our country. Who could have guessed?