
Holding AI Accountable

Andrew H

Active member
Aug 7, 2019
Lewiston, ME
www.usm.maine.edu

Mr. Pierce mentioned in his CIN TTT AI Essentials course the idea of holding LLMs responsible.

Here is an interesting lawsuit that will likely set some legal precedent --


OpenAI, Microsoft face wrongful death lawsuit over 'paranoid delusions' that led former tech worker into murder-suicide

 
Yeah, those that know me know that I'm not the biggest fan of Skynet. There are definitely some things in the AI world that are making people defer to it rather than traditional methods of getting help. However, to me, this seems much like blaming the screwdriver manufacturer for an injury because the person stuck the tool in his eye, blaming the fork for obesity, and so forth.

There is a societal epidemic with regard to mental health; that much is true these days. And while I don't want to get into a long dialogue about mental health, I do wonder: where was the person's family? Now they are seeking damages in what they are framing as a wrongful death suit. How concerned were they about their own family member, who may have been exhibiting signs of paranoid delusion?

One might argue the topic of explainable vs. non-explainable models with respect to AI, but this article seemed to omit any details about Soelberg's (and his family's) responsibility in all this. Is there personal responsibility to be had with respect to human interaction with AI, or is this another example of the woman blaming McDonald's for burns to her lap from spilled hot coffee just because the cup didn't say 'hot'?
 
From a legal argument perspective, a "reasonable person" understands that coffee is hot without a warning label, but does that same "reasonable person" understand that what AI regurgitates is not thought or consciousness? Obviously, in this instance, the person involved would not be considered rational in thought, so should there be safeguards in place to prevent them from accessing such systems? What about minors? All great and reasonable discussions, I think. Glad I am just a lowly instructor and don't need to answer any of the questions I ask!
 
@Rick Butler and @Andrew H, these are good points. I agree a "reasonable person" should not need a warning label about hot coffee. But we find ourselves in a society where we need warning labels for everything (that's a whole other topic).

It is an interesting time we find ourselves in, interacting with AI. I happened to catch a 60 Minutes story this past Sunday (it was on between football games) about kids interacting with Character AI, a chatbot platform currently at the center of a lawsuit in which a family states that their daughter's interactions with the platform led her down a path that ended in her suicide.

There have been similar conversations around other social media platforms that can be used in good and also harmful ways, especially with younger audiences. I agree with Rick's comment asking where the other family members are throughout this experience.

I do feel that AI is a powerful tool, and we as instructors play a key role in helping folks understand what AI is and what it is not. We can help be that warning label for AI. If we teach folks that information AI provides us for school or work needs to be verified, why would we let AI make us feel bad about ourselves?
 
I think that trying to hold AI accountable for how people use it would be about as effective in practice as trying to quiet a room full of toddlers by whispering ‘shhh.’ Encouraging responsible use of AI through education is likely our best shot at avoiding tragedies.

To do my part, I walk from classroom to classroom in the college and peer in the window with a stern and judgmental expression while holding this book: https://triosdevelopers.com/jason.eckert/stuff/ai_ethics.jpg

It seems to be working...
 
From a legal argument perspective, a "reasonable person" understands that coffee is hot without a warning label, but does that same "reasonable person" understand that what AI regurgitates is not thought or consciousness?
Reasonable, to me, is one of those very subjective terms that seems to get pulled back and forth based on monetary stake. Yes, there are weirdos out there who actually *like* cold coffee, but I'm at least smart enough not to jam a screwdriver in my eye and think it's Craftsman's fault if I do it anyway.

I submit this lighthearted point about...consequences...

[attached image]

so should there be safeguards in place to prevent them from accessing such systems? What about minors?
Safeguards...and minors. I would ask Meta about that - it's one of the principal reasons why you'll NEVER see me on Facebook. Meanwhile, I could always open that can of worms about Australia's ban on social media for kids under the age of 16.
I agree with Rick's comment asking where the other family members are throughout this experience.
I remember a previous Partner Summit where CompTIA brought in Kara Swisher, who railed on the big tech companies about child safety. All I could do, between...shall we say...colorful metaphors grumbled quietly to myself, was ask, "Where are the parents in this equation?" Yeah, big tech has a measure of culpability here, but there is a grand thing called parenting that seems to be missing, as kiddos find their way out to Club Penguin and Roblox, where all the creepers await.
To do my part, I walk from classroom to classroom in the college and peer in the window with a stern and judgmental expression while holding this book: https://triosdevelopers.com/jason.eckert/stuff/ai_ethics.jpg

It seems to be working...
Sorry man, but the only thing I can see is the "dome", buddy. I dare say, they are not seeing the title of the book...just...the...dome.
 
I could not agree with Rick more.

There is legislation being offered and considered in several states seeking to 'regulate' AI. If you follow the status of that in California, you'll likely find that sponsors have brought in supporters to testify about this and similar cases. I watched some of this testimony, offered by parents of an adolescent who took their own life. It was wrenching, and I felt for these people. But in the end they acknowledged that they knew this youth had issues. They knew that this youth was spending copious amounts of time using a chatbot. They attempted to intervene, sadly unsuccessfully. But should a state attempt to regulate a technology? Why isn't anyone asking whether there is a better way to protect people from themselves? When my son was a minor and misbehaved, I took his electronic devices away or disabled his account. Why don't parents and loved ones do that?

To move slightly off AI while still on my soapbox: I'm a resident of Florida. It's a great state. But unfortunately, a law enforcement trend is to very publicly arrest juveniles who use cellular devices to threaten other youths, make bomb threats, or report imaginary crimes. How do interactions with law enforcement help produce better citizens? They don't. We need to stop criminalizing these children and pursue their parents and guardians.
 


I liken these issues to people who jump into water knowing they cannot swim. We cannot start blaming AI or technology for every societal ill or psychological breakdown of a person who doesn't seek professional help.

There is an overreliance on technology and social media unlike anything I have ever seen in history. At some point, human beings have to be responsible for our own actions and understand that we need to learn how to use any advancement in this world. AI will change every single fabric of society over time, from work to criminality. It has already begun, like a running deer. The next 3-10 years are really going to be a shift for humanity. The more we can teach and train and get others to learn, the better for some. For the rest, it will be survival of the fittest.

Not every creation survives evolution. Unfortunately, this artificial shift forward will have unintended consequences.