• To ensure you get the most out of your CIN membership and stay connected with the latest updates, we are asking all members to update their community profiles. Please take a few moments to log in and:
  • Complete all sections of your profile
  • Review your current information for accuracy
  • Enter an alternative email address if desired (CIN requires a valid business email address from your training organization)

Keeping your profile up to date helps us better serve you, ensures your account is correctly linked with CompTIA’s CRM, streamlines processes, enhances communication, and guarantees you never miss out on valuable CIN opportunities. Thank you for taking this important step!

Holding AI Accountable

Yeah, those who know me know that I'm not the biggest fan of Skynet. There are definitely some things in the AI world that are making people defer to it rather than to traditional methods of getting help. To me, though, this seems much like blaming the screwdriver manufacturer because someone stuck the tool in his eye, or blaming the fork for obesity, and so forth.

There is a societal epidemic with regard to mental health; that much is clear these days. And while I don't want to get into a long dialogue about mental health, I do wonder where the person's family was - since now they are seeking damages in what they are framing as a wrongful death suit. How concerned were they about their own family member, who may have been exhibiting signs of paranoid delusion?

One might argue the topic of explainable vs. non-explainable AI models, but this article seemed to omit any details about Soelberg's (and his family's) responsibility in all this. Is there personal responsibility to be had in human interaction with AI - or is this another example of the woman blaming McDonald's for burns to her lap from spilled hot coffee, just because the cup didn't say 'hot'?
From a legal perspective, a "reasonable person" understands that coffee is hot without a warning label, but does that same "reasonable person" understand that what AI regurgitates is not thought or consciousness? Obviously, in this instance, the person involved would not be considered rational in thought, so should there be safeguards in place to prevent such people from accessing these systems? What about minors? All great and reasonable discussions, I think. Glad I am just a lowly instructor and don't need to answer any of the questions I ask!

Where are the Linux+ V8 CertMaster Study Instructor videos?

Are you sure you're properly licensed for instructor access to the Linux+ CertMaster? As I understand it, there is no separate product - it's based on your access key: if you're a student, you see student material; if an instructor, instructor material.

Are you using the CertMaster offering that came from the recent TTT? I'm not sure whether that one was set up for student or instructor access - I haven't yet activated mine, as I've been too focused on SecurityX renewal and regular work these days.

/r

I did it!

I know that for me, as a staunch non-conformist and contrarian, I have often said, "I'm so done with Windoze - I'm going Linux." And there are points where I have tried to make that conversion, in the belief that there's nothing I can do with Windows that I cannot get done with Linux.

Except that's not entirely true. Or perhaps it is, but getting there requires an investment of time and energy that I simply don't have, just to replicate in a Linux environment the results I could quickly achieve with MSFT. MSFT has counted on this for decades as its way of holding onto control of the enterprise endpoint, as well as various services out there: pay them money and get things done quicker, versus having to slog through man pages and communities (or now, ask AI) to get things done.

Maybe this is one of those "Well, GIT GUD" things.

I do think everyone in the MSFT ecosystem can and should get better with Linux technology. And while Linux predominantly runs cloud workloads, even on Azure - not to mention Mariner being the base OS underlying Azure Kubernetes - Windows isn't going away anytime soon.

Holding AI Accountable

In his CIN TTT AI Essentials course, Mr. Pierce mentioned the idea of holding LLMs responsible.

Here is an interesting lawsuit that will set some legal precedent:

OpenAI, Microsoft face wrongful death lawsuit over 'paranoid delusions' that led former tech worker into murder-suicide


I did it!

Mitch,
Thanks for the inspiration! Your testimony and Jason Eckert's Linux TTT webinar series gave me the courage to "go deeper" with Linux instruction...
That’s awesome! Going deeper into Linux is the only rabbit hole where the rabbits wear penguin costumes and ask if you’ve tried editing the config file.

CIN TTT Series: AI Essentials and AI Prompting Essentials

@Stephen Schneiter and @Nicholas Pierce
Regarding the AI Essentials and AI Prompting Essentials courses, I do not see the option to take the assessment. Like Mr. Pierce, I am a professor and keep my CompTIA certification account separate from my CompTIA institution account. I would prefer to take the assessment on my personal account, since my full first and last name are on it, rather than on my institution account. I had to change this when SSO was implemented. Under My Assessments there is a Contact Support button, but I noticed it points to mailto:[email protected], which I'm not sure is an active option.

Would you be able to help me with this so I can take the assessment? I'm not sure if others are having issues, but mine is definitely not showing. :(

Also, thank you - @Stephen Schneiter knows how popular he is ;) I wanted to say thank you for all your hard work and patience, especially while I was taking the Linux CIN TTT courses and watching the videos, telling him I have the content and promising I'll do it on my institution account. Spinal fusions are fun, haha. I can't wait for the voucher, and then I finally get my break soon. YAY!
Same situation here.
