Automation and cyborg law: To what extent do we “borrow” from machines?

“Debt” tends to carry a negative connotation: the word is rooted in borrowing something and ultimately carrying responsibility for its repayment. But the concept has enabling dimensions, too. Debt can also mean empowerment, as when a mortgage makes homeownership possible, start-up financing launches a venture, or government bonds inject capital into public needs. The list goes on.

Across various contexts, the accumulation of debt can be a strategic tool. So what happens when borrowing reaches a tipping point and the costs begin to outweigh the benefits?

When it comes to technology, borrowing carries its own sociological costs, particularly around automation. At the base layer sits “technical debt,” the programming concept describing the cost of shortcuts in development decisions: choosing the expedient approach over the longer, tougher, but ultimately cleaner option.
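To make the concept concrete, here is a minimal, hypothetical sketch (the function names and the discount rule are invented for illustration) of what that shortcut-versus-clean trade-off can look like in code:

```python
# A hypothetical illustration of technical debt. The "quick" version hardcodes
# a business rule to ship faster; the "clean" version names and validates it.

# Shortcut taken under deadline pressure: the rule is buried in the logic,
# so every future change means hunting through the codebase.
def apply_discount_quick(price: float) -> float:
    return price * 0.9  # magic number; the borrowed time is repaid later

# The longer, tougher, but cleaner option: explicit, documented, adjustable.
DEFAULT_DISCOUNT_RATE = 0.10

def apply_discount(price: float, rate: float = DEFAULT_DISCOUNT_RATE) -> float:
    """Apply a percentage discount, validating inputs explicitly."""
    if not 0.0 <= rate <= 1.0:
        raise ValueError(f"discount rate must be in [0, 1], got {rate}")
    return price * (1.0 - rate)

print(apply_discount_quick(100.0))  # 90.0, but brittle
print(apply_discount(100.0))        # 90.0, and ready to change safely
```

The quick version ships faster; the debt comes due the first time the rule has to change.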

There is then the elevated concept of “intellectual debt,” which runs even deeper and asks what is sacrificed when insight into AI’s causal mechanisms is lost. That question opens the door to a third concept: “liability debt,” introduced in this article to refer to the gap in personal responsibility that accompanies disproportionate deference to technology.

Viewed from this perspective, the increasing sophistication and ubiquity of automation suggests a natural trajectory: the law evolving alongside the technology. The tension and nuance between humanity and automation are being explored through the emergence of “cyborg law,” which codifies the idea that the “law will have to accommodate the integration of technology into the human being.”

The word cyborg, a combination of “cybernetics” and “organism,” describes an emerging hybrid of machine and humanity. As tech-enabled augmentation of daily life becomes normalized, society seems to grow increasingly comfortable with technology being not only a part of our lives but a part of individuals in a deeply integral way.

Futuristic tech is already integrated into consumer products, such as implants beneath the skin that work as ID cards, and into the field of neurotechnology, which builds devices that interact with the human brain. Wearable devices can already record brain activity in real time, there is tech that can be used to alter memories, and the pace of advancement shows no sign of slowing.

Existing jurisprudence recognizes the rights of humans, but it falls short of recognizing, in a holistic way, the cyborgs people are becoming as we add “machine qualities” to our bodies and consciousness. In 2014, the Supreme Court ruled in Riley v. California that police officers generally may not search the contents of a cell phone without a warrant, reasoning that phones have become such an intrinsic part of daily life that they are, essentially, a part of our “human anatomy.”

If our legal system sees each of us with our phones as a person using a machine, that framing extends protection to only one entity: the person. But robust protections for the person can be nullified in the absence of matching protections for the technology; shielding an individual from surveillance means little if their phone data remains exposed. Perhaps, rather than drawing a binary legal distinction that deems someone or something either a person or property, the law can evolve toward a continuum in which devices integrate with the personhood of their owner.

But in conjunction with focusing on rights, should the focus also shift to responsibilities? As cyborg law evolves to protect the machine half of our human-machine integration, through privacy laws, for example, particularly those regarding AI, perhaps we should also work to understand where the machine, rather than the human, holds decision-making power. Each such transfer widens the gap in responsibility, ultimately giving rise to a debt of liability incurred but left unaccounted for.

In the emerging tech space, technical debt can be accumulated carelessly, by taking coding shortcuts for speed and necessitating a fix down the line, or strategically, by trading expediency for on-time delivery while thoughtfully reserving time for updates later on. Intellectual debt in the realm of AI can likewise be curbed through adherence to principles of explainability, accountability, and transparency, so that the source of the decisions being made can be accurately understood.
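As one hedged illustration of what explainability can look like in practice (the loan scenario, thresholds, and function names below are invented), a system can be designed so that every automated decision carries its own recorded rationale:

```python
# A minimal, hypothetical sketch of explainability by construction: every
# automated decision is returned together with the reasons that produced it.
# The loan criteria and thresholds are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Decision:
    approved: bool
    reasons: list[str] = field(default_factory=list)  # the audit trail

def review_loan(income: float, debt_ratio: float) -> Decision:
    reasons = []
    if income < 30_000:
        reasons.append(f"income {income:,.0f} is below the 30,000 floor")
    if debt_ratio > 0.45:
        reasons.append(f"debt ratio {debt_ratio:.2f} exceeds the 0.45 cap")
    # Approval and rationale travel together, so the "why" is never lost.
    return Decision(approved=not reasons,
                    reasons=reasons or ["all criteria met"])

decision = review_loan(income=28_000, debt_ratio=0.50)
print(decision.approved)  # False
print(decision.reasons)   # the recorded "why," not just the "what"
```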

Emerging governing and guiding schemes advise that AI should, at the very least, be fair, easy to understand, and accountable, and should defer to human decision making, guidance that leading public and private organizations are working to uphold and advance globally.

The coalescence of these principles raises the question of how to mitigate liability debt. In other words, as emerging tech increasingly integrates with our way of life and becomes embedded in decision making, how do we properly account for the human element of responsibility when acting through the cyborg-like way in which we live today?

In the AI and machine-learning (ML) space, one potential legal and risk-mitigation tool is the avoidance of “closed loop” systems: advanced analytics whose decisional output is programmed to act directly, without any human oversight. Guardrails around closed-loop tech builds aim to ensure the safety and security of those using the technology and to keep the human in the driver’s seat, both by mandating human oversight and by blunting any argument that liability can be avoided by deferring to the machine.
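A minimal sketch of that guardrail, assuming a hypothetical risk-review workflow (the recommend and execute functions are invented placeholders): the model may propose an action, but nothing consequential runs until a human approves it.

```python
# A minimal sketch of keeping the loop open: the model recommends, a human
# authorizes, and only then does anything execute. All names are hypothetical.

def recommend_action(features: dict) -> str:
    """Stand-in for a model's decisional output."""
    return "suspend_account" if features.get("risk_score", 0) > 0.8 else "no_action"

def execute(action: str) -> None:
    print(f"executing: {action}")

def run_with_oversight(features: dict) -> None:
    action = recommend_action(features)
    if action == "no_action":
        return
    # The human, not the model, authorizes consequential actions, preserving
    # a clear line of responsibility (and a record for later accountability).
    answer = input(f"Model recommends '{action}'. Approve? [y/N] ")
    if answer.strip().lower() == "y":
        execute(action)
    else:
        print("action rejected by human reviewer; decision logged for audit")

run_with_oversight({"risk_score": 0.92})
```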

Safety and security feel like the baseline consideration, however, almost like a white picket fence around a beautifully manicured lawn, delineating a territory. What happens when an intruder hops the fence, deer make their way through, or a fierce storm knocks some of the fencing over? Keeping the home’s perimeter intact would call for a second layer of strategy, consideration, and tailored attention, drawing on a cross section of local personnel. The field of AI and automation is similar.

Oversight and similar precautions may be an essential initial line of defense for fair and explainable use of AI, but above that baseline, strategic, thoughtful, and carefully tailored techniques will likely be key in structuring an approach to responsible AI and ML automation. Trends in this direction span a variety of measures: setting up cross-disciplinary, independent oversight boards to guide and advise technology development; building out robust AI ethics teams to set frameworks and prioritize “ethics scorecards”; committing to diversity in forming AI review boards for important depth of perspective; and creatively structuring cross-disciplinary AI ethics-review committees to focus separately on elements of the build and the use of the proposed automation, to name a few.
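To suggest what an “ethics scorecard” might look like as a concrete, reviewable artifact (the criteria, statuses, and names below are entirely invented), one could imagine something as simple as:

```python
# A hypothetical sketch of an ethics scorecard: a review board signs off on
# named criteria, each with an accountable human reviewer attached.
from dataclasses import dataclass

@dataclass
class ScorecardItem:
    criterion: str  # the dimension under review
    status: str     # "pass", "fail", or "needs review"
    reviewer: str   # the accountable human, by name

scorecard = [
    ScorecardItem("fairness metrics reviewed on holdout data", "pass", "A. Rivera"),
    ScorecardItem("decision rationale explainable to end users", "needs review", "B. Chen"),
    ScorecardItem("human override path tested end to end", "pass", "C. Okafor"),
]

# Sign-off is blocked until every criterion passes review.
unresolved = [item.criterion for item in scorecard if item.status != "pass"]
print(f"{len(unresolved)} item(s) blocking sign-off: {unresolved}")
```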

Our world is advancing quickly, and bodies of law and ethical principles, like risk and legal considerations generally, often follow rather than lead trends in the latest tech. Thoughtful accumulation of debts, whether financial, technical, intellectual, or ethical, may be a mainstay, but reflection suggests that any sort of borrowing benefits from a discerning eye, a steady hand, an evolving perspective, and a commitment to revisiting first principles while structuring novel, thoughtful approaches to increasingly nuanced considerations.