Machine Learning Bias: An Existential Risk

Lars Wood, CEO of QAI Swarm and head of LARSX RESEARCH, a United States company deploying patent-pending Quantum Artificial Intelligence (QAI) cognitive reactor collective intelligence for cryptocurrency blockchain solo and pool mining, joins Risk Roundup to discuss “Machine Learning Bias: An Existential Risk”.

Overview

The invention of the internet and the birth of the World Wide Web came only a few decades ago. The magnitude of those inventions, and the advances in computing and communications that have followed, are not only connecting humans across nations but fundamentally changing their lives as we speak.

Since each new idea, innovation and technology brings transformative potential, there is a growing belief that the on-going technology transformation in cyberspace will play a central role in increasing equality and fairness, giving us the power to transform the world not only in cyberspace, but also in geospace and space (CGS).

While cyberspace can be a force for equality, the dawn of artificial intelligence (AI) also promises a level playing field across CGS, where everyone, irrespective of race, religion, class or connections, would have an equal opportunity not only in education, but in employment, entrepreneurship, survival, success, satisfaction and a shot at prosperity.

However, as machine intelligence becomes more ubiquitous and systems are controlled not only in cyberspace but also in geospace and space, mounting evidence of biased algorithms is raising concerns about algorithms making judgment calls. So, how are machine learning algorithms becoming biased?

Let us evaluate this further:

The digital global age gave us hope that the on-going technology transformation would bring much-needed equality, transparency, fairness and a level playing field. As artificial intelligence strives to be a dependable decoder of digital data, numerous AI-based tools, technologies and processes are trickling into everything from driving directions to dream jobs, loan applications to college applications, and so much more.

  • While the potential for technology transformation across CGS remains, is the on-going technology transformation bringing us a tool for equality and fairness?
  • Amid growing signs that the digital global age has come to depend on an Artificial Intelligence (AI) based digital data information infrastructure, do we have dependable algorithms?
  • Since AI systems learn from social data that reflects human history, with all its biases and prejudices intact, can algorithms unintentionally amplify those biases?
  • Is it possible for intelligent machines to be objective? Are intelligent machines defined and designed to be inherently objective?
  • How are machine learning systems designed today?
  • Can we trust today’s AI-based supervised learning systems?
  • What is the potential for bias to creep into AI, and where does that bias originate? What are its sources?
  • How do we ensure that names, religion, origin, skin color or class are not injected into machine learning algorithms to create bias and tilt the playing field?
  • How do we find bias in the mind of a machine? Is there any test?
  • How widespread is machine learning based decision-making technology? Is it even possible to know how widely AI is adopted now?
  • How bad is the bias problem? Do we have any data that highlights the bias, or the tilted playing field, in cyberspace?
  • What impact will biased algorithms have?
  • Who stands to lose the most from bias in algorithms?
  • Are there laws to protect against discrimination caused by algorithmic decision-making?
  • Are there effective rules or regulations that focus on algorithm accountability?
  • How do we measure the performance of these algorithms? Are there any effective benchmarks?
  • Is there any organization that tests different versions of algorithms from different sources and rates them for public use?
  • Do we have effective tools to test algorithms?
  • What are the implications for nations: its government, industries, organizations and academia (NGIOA)?
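One concrete answer to the question of whether there is a test for bias is the statistical “four-fifths rule” used in disparate-impact analysis: compare the rate of favorable outcomes an algorithm produces for each group, and flag the system if the lowest group rate falls below 80% of the highest. The sketch below is a minimal, hypothetical Python illustration; the group names and decision data are invented for demonstration and do not come from any real system.

```python
def disparate_impact_ratio(outcomes):
    """Ratio of the lowest to the highest group selection rate.

    outcomes: dict mapping group name -> list of 0/1 decisions
    (1 = favorable outcome, e.g. a loan approved).
    A ratio below 0.8 is the classic "four-fifths rule" red flag.
    """
    rates = {group: sum(d) / len(d) for group, d in outcomes.items()}
    return min(rates.values()) / max(rates.values())

# Hypothetical decisions produced by an automated screening system.
decisions = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1],  # 6/8 = 0.75 approval rate
    "group_b": [1, 0, 0, 1, 0, 0, 0, 1],  # 3/8 = 0.375 approval rate
}

ratio = disparate_impact_ratio(decisions)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.375 / 0.75 = 0.50
if ratio < 0.8:
    print("Potential adverse impact: ratio falls below the four-fifths rule.")
```

A check like this only detects unequal outcomes; it says nothing about why they arise, which is why the questions above about data sources, design and accountability still matter.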

Conclusion

Identifying, isolating and eliminating the biases that cause Artificial Intelligence to make decisions that either endanger human life or discriminate is one of the biggest challenges facing machine-learning developers today.

We at Risk Group call attention to risks impacting humanity at all levels, including the biases that breed inequality, by emphasizing them, raising awareness of their existence, educating individuals and entities across NGIOA, and making every effort to correct them as best we can. By identifying the problem and raising awareness of it, we take the first step toward addressing it. Now is the time to talk about the risks of biased algorithms!

For more, please watch the Risk Roundup Webcast or listen to the Risk Roundup Podcast.

About the Guest

Lars Wood is the CEO of QAI Swarm and heads LARSX RESEARCH, a United States company deploying patent-pending Quantum Artificial Intelligence (QAI) cognitive reactor collective intelligence for cryptocurrency blockchain solo and pool mining. His innovative career spans patented ANN algorithms, advanced microelectronics, thermonuclear and quantum physics, supercomputing machines, condensed matter physics, superconducting electronics, subatomic matter visualizations, and “Smart Molecule” drug discovery, in which molecules use supramolecular forces to make decisions about their biological activity. His patented ANN science was the first to solve a non-DARPA, large-scale DoD military challenge thought to be impossible, generating hundreds of millions of dollars for GTE (GD/Verizon), and he served as a CIA go-to resource for unyielding agency technical challenges. He is also the founder and director of the GTE-GS award-winning Advanced Machine Intelligence Laboratory, has been granted 8 foundational ANN patents, and received the highest research award in competition with the founders of ML and AI. He has been a visiting scientist at MIT, JPL, CIA, SCF, XILINX, SFI, DOS, FBI, LANL, NSA, NRO, DISA, DIA, and the White House.

About LARSX RESEARCH

LARSX RESEARCH is a Montana company deploying patent-pending Quantum Artificial Intelligence (QAI) cognitive reactor collective intelligence for cryptocurrency blockchain solo and pool mining. QAI collective intelligence orchestrates current brute-force miners to dramatically reduce the computations necessary for successful mining of the blockchain. QAI is a high-performance GPU and FPGA computing, fine-grained, unsupervised reinforcement machine learning platform implemented using quantum signaling. This results in optimal blockchain mining, requires no training sets, and is unbiased in its results. In comparison to current brute-force, data-independent blockchain mining, QAI implements much more efficient, optimal, data-dependent “Learn to Hash” mining.

About the Host of Risk Roundup

Jayshree Pandya (née Bhatt) is a visionary leader who is working passionately, with imagination, insight and boldness, to achieve Global Peace through Risk Management. It is her strong belief that collaboration within, between and across nations: its government, industries, organizations and academia (NGIOA) will be mutually beneficial to all, not only for the identification and understanding of critical risks facing one nation, but also for managing the interconnected and interdependent risks facing all nations. She calls on nations to build a shared sense of identity and purpose, for how the Security Centric Integrated Cyberspace, Geospace and Space Risk Management framework is structured will determine the survival and success of nations in the Digital Global Age. She sees the big picture, thinks strategically and works with the power of intentionality and alignment for a higher purpose, for her eyes are not just on what is near at hand but on the future of humanity!

At Risk Group, Jayshree is driving the thought leadership on “Strategic Security Risk Intelligence”! She believes that Cyberspace, Geospace or Space (CGS) cannot be secured if NGIOA works in silos within and across its geographical boundaries. As security requires an integrated NGIOA approach with a common language, she has recently launched Cyber-Security, Geo-Security and Space-Security Risk Research Centers that will merge the boundaries of Geo-Security, Cyber-Security and Space-Security.

In 2015, Jayshree launched “Risk Roundup”, an Integrated Cyber-Security, Geo-Security and Space-Security Risk Dialogue. Risk Roundup Webcast and Podcast episodes are available on YouTube, iTunes, Google Play, the Risk Group website, and professional social media.

Jayshree’s inaugural book, The Global Age: NGIOA @ Risk, was published by Springer in 2012.

About Risk Roundup

Risk Roundup: Webcast/Podcast, a global initiative launched by Risk Group, is an integrated cyberspace, geospace, and space (CGS) security risk dialogue for individuals and entities across nations: its government, industries, organizations and academia (NGIOA). Risk Roundup seeks to promote and enhance CGS risk intelligence through the collective participation of decision makers from across NGIOA.

Risk Roundup is released in both audio (Podcast) and video (Webcast) formats and is available for subscription on the Risk Group Website, iTunes, Google Play, Stitcher Radio, Android, and Risk Group professional social media.

About Risk Group

Risk Group is an integrated cyberspace, geospace and space (CGS) security risk research organization. Risk Group is on a mission to epitomize the collective risk intelligence of nations: its government, industries, organizations and academia (NGIOA) at the synergistic intersection of independent, interconnected and interdependent CGS security risks, so as to achieve a more effective process for collective security risk intelligence, management and governance than the siloed and fragmented security risk approaches we have across nations today. Risk Group is determined to engage this collective NGIOA risk intelligence capability to manage CGS security risks, the risks impacting individuals and entities across NGIOA. A collective NGIOA risk intelligence capability will be transformative, not only for achieving CGS security but also for global peace.

Risk Group believes that risk management, security and peace walk hand in hand. Though security concerns the management of threats and peace the management of conflict, risk management concerns the management of both security vulnerabilities and conflict, and it is not possible to conceive of any one of the three without the other two. All three concepts feed into each other. Risk Group believes that the security we build for ourselves is precarious and uncertain until it is secured for everyone across nations. Tradition becomes our security, so if we build a culture of managing risks effectively, it will lead us to security, and security will lead us to peace!

Copyright Risk Group LLC. All Rights Reserved
