The tech giant is evaluating tools that would use artificial intelligence to perform tasks that some of its researchers have said should be avoided.
Google’s A.I. safety experts had said in December that users could experience “diminished health and well-being” and a “loss of agency” if they took life advice from A.I. They had added that some users who grew too dependent on the technology could think it was sentient. And in March, when Google launched Bard, it said the chatbot was barred from giving medical, financial or legal advice. Bard shares mental health resources with users who say they are experiencing mental distress.
Why in the pissity-fuck would I take life advice from Google, Google applications, or an AI trained by Google?
That is so far outside what I'd consider reasonable.
There’s a lot of dumdums out there who will fall for this shit.
Get rich quick! Buy my $600 box set and you'll be on your way to financial independence.
I think this is more of a precaution. They are not building a service specifically for this, but probably updating Bard to handle the case where a user asks those questions. I think it's reasonable, but it has to be developed and released in the most curated, well-tested state possible to avoid a repeat of a story that already happened (that suicide hotline that suddenly went full AI, then backtracked because it responded badly).