Safe, Secure, and Trustworthy Artificial Intelligence But Not For You

November 02, 2023 — Jt Spratley

In October 2022, the Biden-Harris administration quietly released the Blueprint for an AI Bill of Rights, followed in October 2023 by an executive order on safe, secure, and trustworthy Artificial Intelligence (AI). Both are quite long reads. Although I share my thoughts on them below, I recommend reading both if you're interested in the topic.

AI and Transparency with the Public

"Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government."

That last part is most alarming. AI developers are expected to share their work only with the federal government, not with citizens who may be concerned about excessive big-data collection, use of AI in untested situations, or discrimination.

"Irresponsible uses of AI can lead to and deepen discrimination, bias, and other abuses in justice, healthcare, and housing."

It already does. Facial scanners, fingerprint scanners, hiring managers' resume-filtering tools, and search engine results all carry biases that have been shown to manipulate and discriminate against people. Facebook was called out for how it delivered information during the Clinton-versus-Trump election cycle. I've shared other examples in my post reviewing Vice President Kamala Harris's interview at Hampton University.

An interesting example of this happened in October 2023, when Ars Technica, of all platforms, published an article asking why YouTube hadn't taken down Cynthia G.'s video urging women to abort Black male babies. Brother Oshay Duke Jackson argued that Google kept the video up because it garnered so many views, which means more ad revenue and profits for the company. One challenge to his argument is that we don't know how many times that video or account was flagged by Black men, or anyone else for that matter. We do know that when you watch certain types of content, YouTube is more likely to recommend similar content. Feminism, misandry, and hembrism are all on the rise. This highlights an increasingly common corporate tactic: when subjected to public pressure over foul actions, companies deflect accountability by hiding behind the claim, "that's the AI's fault, and it should've notified us." Food for thought.

Equality, Justice, and AI

Advancing Equity and Civil Rights
"Ensure fairness throughout the criminal justice system by developing best practices on the use of AI in sentencing, parole and probation, pretrial release and detention, risk assessments, surveillance, crime forecasting and predictive policing, and forensic analysis."

I continue to despise the term "equity" because, unlike "equality," it doesn't address the root issue. Just like private corporations, police officers and judges can use AI as a scapegoat for bad decisions or to force a decision for malicious purposes. This will likely make manual audits of AI models a larger part of digital forensics. For decades, cops have justified police brutality against innocent Black men with the simple claim that "he's Black." Why wouldn't AI be used in a similar fashion?

Big Data Collection and Inquiries

"Evaluate how agencies collect and use commercially available information—including information they procure from data brokers—and strengthen privacy guidance for federal agencies to account for AI risks."

We Americans need to normalize discussing and exercising privacy laws. Then-president Barack Obama signed the Freedom of Information Act (FOIA) Improvement Act of 2016, which birthed "a consolidated online request portal that allows a member of the public to submit a request for records under subsection (a) to any agency from a single website." The General Data Protection Regulation (GDPR) has been in effect since 2018. California implemented its California Consumer Privacy Act (CCPA) in 2019. Have questions about how a federal organization is handling your data? Make a FOIA request through that portal. Curious what data Twitter or other private companies store from your activity? Check their privacy policy and make a GDPR data request. If you're in California, use the CCPA to avoid giving companies more information than they need.

Training AI Professionals

"Develop principles and best practices to mitigate the harms and maximize the benefits of AI for workers..."

This statement doesn't address AI training to improve productivity, leading me to believe that this is more about shifting employees' responsibilities as AI is able to do more with better accuracy.

"Use existing authorities to expand the ability of highly skilled immigrants and non-immigrants with expertise in critical areas to study, stay, and work in the United States by modernizing and streamlining visa criteria, interviews, and reviews."

The inclusion of immigrants in this statement, instead of a singular focus on our own citizens, doesn't sit well with me, given all the money being funneled into turning illegal immigrants into middle-class residents in Chicago and New York City while ignoring the Black and homeless to prevent the reemergence of the "Chicago Black Belt." Between the money for illegals' facilities and supporting Ukraine against Russia, there's no money left for Blacks' reparations, though.

"Accelerate the rapid hiring of AI professionals as part of a government-wide AI talent surge led by the Office of Personnel Management, U.S. Digital Service, U.S. Digital Corps, and Presidential Innovation Fellowship. Agencies will provide AI training for employees at all levels in relevant fields."

AI training will be provided for federal employees but not for privately owned businesses. Makes sense. For non-federal employees and business owners, I highly recommend checking out the free AI training courses on IBM SkillsBuild. Students and alumni at historically Black colleges and universities (HBCUs) should take advantage of the IBM HBCU Cybersecurity Initiative. Learn basic scripting and hashing in your command-line interface (CLI), and stop sharing so much info. You'll need a fundamental understanding of AI if this executive order is anything to go by. It says nothing about transparency with US citizens, despite the fact that the AI Bill of Rights clearly states:

"...including reporting of steps taken to mitigate potential harms, should be performed and the results made public whenever possible."
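On the CLI hashing tip above: here's a minimal sketch of what that looks like in practice, using the common `sha256sum` utility from GNU coreutils (the file name is a made-up example). Hashing lets you verify that a file you downloaded or shared hasn't been altered.

```shell
# Create a sample file (hypothetical name, just for illustration)
echo "resume draft v2" > notes.txt

# Compute its SHA-256 digest; any change to the file changes this output
sha256sum notes.txt

# Save the digest, then verify the file against it later
sha256sum notes.txt > notes.txt.sha256
sha256sum -c notes.txt.sha256   # prints "notes.txt: OK" if unchanged
```

If even one byte of the file changes, the `-c` check fails, which is exactly why software vendors publish checksums alongside downloads.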

Tags: cybersecurity
