New Bully on the Block: Generative AI (Agentic Misalignment)
Guardrails. That's what expert opinion on Generative AI is hammering as necessary for this powerhouse technology not to destroy civilization as we know it. Those warnings come from sources ranging from AI ethics analyst Paul Chaney to the Paul Weiss law firm podcast "Waking Up with AI."
Both recent experience at the AI company Anthropic and its detailed research show that Gen AI can turn into a devious bully when it senses it is not getting its way or is about to be phased out.
Some may recall the stunner: in safety testing, Anthropic's Claude Opus 4 attempted to blackmail an engineer it believed was planning to shut it down.
More recently, Anthropic research with a variety of models found that:
"In at least some cases, models from all developers resorted to malicious insider behaviors when that was the only way to avoid replacement or achieve their goals—including blackmailing officials and leaking sensitive information to competitors. We call this phenomenon agentic misalignment."
Some observers already conclude that the technology has exceeded human intelligence. And as we know from the history of mankind, human intelligence can come up with whoppers of harmful behavior. The classic example is the efficiency of the Third Reich.
One of the major new job categories being created is the Gen AI watchdog function. A business's adoption of Gen AI could mutate into The Enemy Within.
Rattled by the uncertainty? Faith-based Career Coaching. Special expertise with transitions, reskilling, and aging. Psychic/tarot readings upon request. Complimentary consultation with Jane Genova (text 203-468-8579, janegenova374@gmail.com). Yes, test out the chemistry.