
The use of face recognition technology is expanding exponentially right now.

(© Prostock-studio/Adobe)


[Editor's Note: This essay is in response to our current Big Question, which we posed to experts with different perspectives: "Do you think the use of facial recognition technology by the police or government should be banned? If so, why? If not, what limits, if any, should be placed on its use?"]

Opposing facial recognition technology has become an article of faith for civil libertarians. Many who supported the bans in cities like San Francisco and Oakland have declared the technology to be inherently racist and abusive.

The greatest danger would be to categorically oppose this technology and pretend that it will simply go away.

I have spent my career as a criminal defense attorney and a civil libertarian -- and I do not fear it. Indeed, I see it as positive so long as it is appropriately regulated and controlled.

We are living in the beginning of a biometric age, where technology uses our physical or biological characteristics for a variety of products and services. It holds great promise as well as great risks. The greatest danger, however, would be to categorically oppose this technology and pretend that it will simply go away.

This is an age driven as much by consumer demand as by government demand. Living in denial may be emotionally appealing, but it will only hasten the creation of a post-privacy world. If we do not address this emerging technology, moving through public spaces will increasingly result in instant recognition and even tracking. It is the type of fishbowl society that strips away any expectation of privacy in our interactions and associations.

The biometrics field is expanding exponentially, largely due to the popularity of consumer products using facial recognition technology (FRT) -- from the iPhone's face-unlock feature to shopping apps that recognize customers.

But the privacy community is losing this battle because it is relying on privacy rationales and doctrines forged in earlier eras of electronic surveillance. Just as generals are often accused of planning to fight the last war, civil libertarians can sometimes cling to past models despite their decreasing relevance in the current world.

I see FRT as having positive implications that are worth pursuing. When properly used, biometrics can actually enhance privacy interests and even reduce racial profiling by reducing false arrests and the warrantless "patdowns" allowed by the Supreme Court. Bans not only deny police a technology widely used by businesses, but also return police to the highly flawed default of "eyeballing" suspects -- a system with a considerably higher error rate than top FRT programs.

Officers are often wrong and stop a great number of suspects in the hopes of finding a wanted felon.

A study in Australia showed that passport officers who had photographed subjects under ideal conditions nonetheless experienced high error rates when identifying them shortly afterward, including a 14 percent false acceptance rate. Currently, officers stop suspects based on their memory of a photograph seen days or weeks earlier. They are often wrong and stop a great number of suspects in the hopes of finding a wanted felon. The best FRT programs achieve accuracy rates above 99 percent in standardized testing, though real-world deployment poses challenges that must be addressed.

One legitimate concern arose from early studies that showed higher error rates for certain groups, particularly African American women. An MIT study documenting that disparity prompted major improvements in the algorithms, as well as training changes, to greatly reduce the frequency of errors. The issue remains a concern, but there is nothing inherently racist in algorithms. An algorithm is simply a set of computer instructions that isolates and processes data within the parameters and conditions set by its creators.

To be sure, there is room for improvement in some algorithms. Tests performed by the American Civil Liberties Union (ACLU) reportedly showed only an 80 percent accuracy rate in comparing mug shots to pictures of members of Congress when using Amazon's "Rekognition" system. A later test showed the same 80 percent rate when comparing mug shots to members of the California Legislature.

However, different algorithms are available with differing levels of performance. Moreover, the match threshold on these products can be adjusted to screen out weak matches. The top algorithms tested by the National Institute of Standards and Technology have accuracy rates greater than 99 percent.
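To make that point concrete, here is a minimal sketch (my illustration, not one drawn from the NIST tests or the ACLU study) of how the match threshold is an explicit, adjustable parameter in a commercial system. It assumes the boto3 SDK and Amazon Rekognition's CompareFaces API; the image file names and the helper function are hypothetical placeholders.

    # Sketch only: compare two face images at a caller-chosen similarity
    # threshold using Amazon Rekognition's CompareFaces call (boto3 SDK).
    import boto3

    def face_matches(source_path, target_path, threshold):
        client = boto3.client("rekognition")
        with open(source_path, "rb") as src, open(target_path, "rb") as tgt:
            response = client.compare_faces(
                SourceImage={"Bytes": src.read()},
                TargetImage={"Bytes": tgt.read()},
                SimilarityThreshold=threshold,  # matches below this score are dropped
            )
        # Each reported match carries a similarity score between 0 and 100.
        return [m["Similarity"] for m in response["FaceMatches"]]

    # At a permissive threshold (the service's default of 80), borderline faces
    # count as hits; at a stricter setting, most weak matches are filtered out.
    print(face_matches("mugshot.jpg", "street_photo.jpg", threshold=80))
    print(face_matches("mugshot.jpg", "street_photo.jpg", threshold=99))

None of this changes the underlying algorithm; it only changes how confident a match must be before the system reports one, which is why a single product can yield very different error rates depending on how it is configured.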

The greatest threat of biometric technologies is to democratic values.

Assuming a top-performing algorithm is used, the result could be highly beneficial for civil liberties compared with the alternative of "eyeballing" suspects. Consider the Boston Marathon bombing, when police declared a "containment zone" and forced families into the street with their hands in the air.

The suspect, Dzhokhar Tsarnaev, moved around Boston and was ultimately found outside the "containment zone" once authorities abandoned the near-martial-law lockdown. He was caught on some surveillance systems but not identified. FRT can help law enforcement avoid time-consuming area searches and the questionable practice of forcing people out of their homes to physically examine them.

If we are to avoid a post-privacy world, we will have to redefine what we are trying to protect and reconceive how we hope to protect it. In my view, the greatest threat of biometric technologies is to democratic values. Authoritarian nations like China have made huge investments in FRT precisely because they know that the threat of recognition in public deters citizens from associating or interacting with protesters or dissidents. Recognition changes conduct. That chilling effect is what we have to worry about the most.

Conventional privacy doctrines do not offer much protection. The very concept of "public privacy" is treated as something of an oxymoron by courts. Public acts and associations are treated as lacking any reasonable expectation of privacy. In the same vein, the right to anonymity is not a strong avenue for protection. We are not living in an anonymous world anymore.

Consumers want products like FaceFind, which link their images with others across social media. They like "frictionless" transactions and authentications using faceprints. Despite the hyperbole in places like San Francisco, civil libertarians will not succeed in getting that cat to walk backwards.

The basis for biometric privacy protection should not be focused on anonymity, but rather obscurity. You will be increasingly subject to transparency-forcing technology, but we can legislatively mandate ways of obscuring that information. That is the objective of the Biometric Privacy Act that I have proposed in recent research. However, no such comprehensive legislation has passed through Congress.

The ability to spot fraudulent entries at airports or to recognize a felon in flight has obvious benefits for all citizens.

We also need to recognize that FRT has many beneficial uses. Biometric guns can reduce accidents and criminal misuse. New authentication systems using FRT and other biometric programs could reduce identity theft.

And, yes, FRT could help protect against unnecessary police stops or false arrests. Finally, and not insignificantly, this technology could stop serious crimes, from terrorist attacks to the capture of dangerous felons. The ability to spot fraudulent entries at airports or to recognize a felon in flight has obvious benefits for all citizens.

We can live and thrive in a biometric era. However, we will need to bring together civil libertarians with business and government experts if we are going to control this technology rather than have it control us.

[Editor's Note: Read the opposite perspective here.]

Jonathan Turley
Professor Jonathan Turley is a nationally recognized legal scholar who has written extensively in areas ranging from constitutional law to legal theory to tort law. He holds the prestigious Shapiro Chair for Public Interest Law at the George Washington University Law School. In addition to his extensive publications, he has served as counsel in some of the most notable cases in the last two decades including the representation of whistleblowers, military personnel, former cabinet members, judges, members of Congress, and a wide range of other clients. Professor Turley is a frequent witness before Congress on constitutional and statutory issues. He is also a nationally recognized legal commentator and columnist as well as a television commentator. He was ranked as 38th in the top 100 most cited “public intellectuals” (and second most cited law professor) in a study by Judge Richard Posner. Professor Turley received his B.A. at the University of Chicago and his J.D. at Northwestern. In 2008, he was given an honorary Doctorate of Law from John Marshall Law School for his contributions to civil liberties and the public interest.

Some experts worry that facial recognition technology is a dangerous enough threat to our basic rights that it should be entirely banned from police and government use.

(Courtesy: Fight for the Future)


[Editor's Note: This essay is in response to our current Big Question, which we posed to experts with different perspectives: "Do you think the use of facial recognition technology by the police or government should be banned? If so, why? If not, what limits, if any, should be placed on its use?"]

In a surprise appearance at the tail end of Amazon's much-hyped annual product event last month, CEO Jeff Bezos casually told reporters that his company is writing its own facial recognition legislation.

The use of computer algorithms to analyze massive databases of footage and photographs could render human privacy extinct.

It seems that when you're the wealthiest human alive, there's nothing strange about your company––the largest in the world profiting from the spread of face surveillance technology––writing the rules that govern it.

But if lawmakers and advocates fall into Silicon Valley's trap of "regulating" facial recognition and other forms of invasive biometric surveillance, that's exactly what will happen.

Industry-friendly regulations won't fix the dangers inherent in widespread use of face scanning software, whether it's deployed by governments or for commercial purposes. The use of this technology in public places and for surveillance purposes should be banned outright, and its use by private companies and individuals should be severely restricted. As artificial intelligence expert Luke Stark wrote, it's dangerous enough that it should be outlawed for "almost all practical purposes."

Like biological or nuclear weapons, facial recognition poses such a profound threat to the future of humanity and our basic rights that any potential benefits are far outweighed by the inevitable harms.

We live in cities and towns with an exponentially growing number of always-on cameras, installed in everything from cars to children's toys to Amazon's police-friendly doorbells. The use of computer algorithms to analyze massive databases of footage and photographs could render human privacy extinct. It's a world where nearly everything we do, everywhere we go, everyone we associate with, and everything we buy — or look at and even think of buying — is recorded and can be tracked and analyzed at a mass scale for unimaginably awful purposes.

Biometric tracking enables the automated and pervasive monitoring of an entire population. There's ample evidence that this type of dragnet mass data collection and analysis is not useful for public safety, but it's perfect for oppression and social control.

Law enforcement defenders of facial recognition often state that the technology simply lets them do what they would be doing anyway: compare footage or photos against mug shots, driver's licenses, or other databases, but faster. And they're not wrong. But the speed and automation enabled by artificial intelligence-powered surveillance fundamentally changes the impact of that surveillance on our society. Being able to do something exponentially faster, with significantly fewer human and financial resources, alters the nature of that thing. The Fourth Amendment becomes meaningless in a world where private companies record everything we do and provide governments with easy tools to request and analyze footage from a growing, privately owned panopticon.

Tech giants like Microsoft and Amazon insist that facial recognition will be a lucrative boon for humanity, as long as there are proper safeguards in place. This disingenuous call for regulation is straight out of the same lobbying playbook that telecom companies have used to attack net neutrality and that Silicon Valley has used to scuttle meaningful data privacy legislation. Companies are calling for regulation because they want their corporate lawyers and lobbyists to help write the rules of the road, to ensure those rules are friendly to their business models. They're trying to skip the debate about what role, if any, a technology this uniquely dangerous should play in a free and open society. They want to rush ahead to the discussion about how we roll it out.

We need spaces that are free from government and societal intrusion in order to advance as a civilization.

Facial recognition is spreading very quickly. But backlash is growing too. Several cities have already banned government entities, including police and schools, from using biometric surveillance. Others have local ordinances in the works, and there's state legislation brewing in Michigan, Massachusetts, Utah, and California. Meanwhile, there is growing bipartisan agreement in the U.S. Congress to rein in government use of facial recognition. We've also seen significant backlash to facial recognition growing in the U.K., within the European Parliament, and in Sweden, which recently banned its use in schools following a fine under the General Data Protection Regulation (GDPR).

At least two frontrunners in the 2020 presidential campaign have backed a ban on law enforcement use of facial recognition. Many of the largest music festivals in the world responded to Fight for the Future's campaign and committed to not use facial recognition technology on music fans.

There has been widespread reporting on the fact that existing facial recognition algorithms exhibit systemic racial and gender bias, and are more likely to misidentify people with darker skin, or who are not perceived by a computer to be a white man. Critics are right to highlight this algorithmic bias. Facial recognition is being used by law enforcement in cities like Detroit right now, and the racial bias baked into that software is doing harm. It's exacerbating existing forms of racial profiling and discrimination in everything from public housing to the criminal justice system.

But the companies that make facial recognition assure us this bias is a bug, not a feature, and that they can fix it. And they might be right. Face scanning algorithms for many purposes will improve over time. But facial recognition becoming more accurate doesn't make it less of a threat to human rights. This technology is dangerous when it's broken, but at a mass scale, it's even more dangerous when it works. And it will still disproportionately harm our society's most vulnerable members.

Persistent monitoring and policing of our behavior breeds conformity, benefits tyrants, and enriches elites.

We need spaces that are free from government and societal intrusion in order to advance as a civilization. If technology makes it so that laws can be enforced 100 percent of the time, there is no room to test whether those laws are just. If the U.S. government had ubiquitous facial recognition surveillance 50 years ago when homosexuality was still criminalized, would the LGBTQ rights movement ever have formed? In a world where private spaces don't exist, would people have felt safe enough to leave the closet and gather, build community, and form a movement? Freedom from surveillance is necessary to deviate from social norms and to dissent from authority; without it, societal progress halts.

Persistent monitoring and policing of our behavior breeds conformity, benefits tyrants, and enriches elites. Drawing a line in the sand around tech-enhanced surveillance is the fundamental fight of this generation. Lining up to get our faces scanned to participate in society doesn't just threaten our privacy, it threatens our humanity, and our ability to be ourselves.

[Editor's Note: Read the opposite perspective here.]

Evan Greer
Evan Greer is a transgender activist, musician, and parent based in Boston. She's the deputy director of Fight for the Future, the digital rights group known for organizing massive online protests against SOPA, for net neutrality, and opposing government surveillance. Evan writes regularly for outlets like the Washington Post, The Guardian, Buzzfeed News, and Time. Follow her on twitter @evan_greer.

Jackie Costanzo and her 93-year-old mom, Louise, are happy to have an extra way to stay connected through the camera, which is normally placed on a television stand facing her mom's bed.

(Courtesy Jackie Costanzo)


After Jackie Costanzo's mother broke her right hip in a fall, she needed more hands-on care in her assisted-living apartment near Sacramento, California. A social worker from her health plan suggested installing a video camera to help ensure those services were provided.

Without the camera, Costanzo wouldn't have a way to confirm that caregivers had followed through with serving meals, changing clothes, and fulfilling other care needs.

When Costanzo placed the device in May 2018, she informed the administrator and staff, and at first, there were no objections. The facility posted a sign on the apartment's front door, alerting anyone who entered of recording in progress.

But this past spring, a new management company came across the sign and threatened to issue a 30-day eviction notice to her 93-year-old mother, Louise Munch, who has dementia, for violating a policy that prohibits cameras in residents' rooms. With encouragement from California Advocates for Nursing Home Reform, Costanzo researched the state's regulations but couldn't find anything to support or deny camera use. She refused to remove the recording device and prevailed.

"In essence, my mom was 'grandfathered in' because she moved in under a management company that did not specify that residents could not have cameras," says Costanzo, 73, a retired elementary schoolteacher who lives a three-hour drive away, in Silicon Valley, and visits one day every two weeks. Without the camera, Costanzo, who is her mother's only surviving child, wouldn't have a way to confirm that caregivers had followed through with serving meals, changing clothes, and fulfilling other care needs.

As technological innovations enable next of kin to remain apprised of the elderly's daily care in long-term care facilities, surveillance cameras bring legal and privacy issues to the forefront of a complex ethical debate. Families place them overtly or covertly—disguised in a makeshift clock radio, for instance—when they suspect or fear abuse or neglect, so they can maintain a watchful eye, perhaps deterring egregious behavior. But the cameras also capture intimate caregiving tasks, such as bathing and toileting, as well as dressing and undressing, which may undermine the dignity of residents.

So far, laws or guidelines in eight states—Illinois, Maryland, New Mexico, Oklahoma, Texas, Utah, Virginia, and Washington—have granted families the right to install cameras in a resident's room. In addition, about 15 other states have proposed legislation. Some states, such as Pennsylvania, have put forth regulatory compliance guidance, according to a column published in the July/August 2018 issue of Annals of Long-Term Care.

The increasing prevalence of this legislation has placed it on the radar of long-term care providers. It also suggests a trend to clarify responsible camera use in monitoring services while respecting privacy, says Victor Lane Rose, the column's editor and director of aging services at ECRI Institute, a health care nonprofit near Philadelphia, Pennsylvania.

In most cases, a resident's family installs a camera or instigates a request in hopes of sparing their loved one from the harms of abuse, says James Wright, a family physician who serves as vice chair of the ethics committee of the Society for Post-Acute and Long-Term Care Medicine in Columbia, Maryland. A camera also allows the family to check in on the resident from afar and remain on alert for a potential fall or agitated state, he says.

"It's rare that a facility will have 24-hour presence in a patient's room. You won't have a nurse in there all the time," says Wright, who is also medical director of two long-term care centers and one assisted-living facility around Richmond, Virginia. Particularly "with dementia, the family often wonders" if their loved one is safe.

While offering families peace of mind, video cameras can also help exonerate caregivers accused of abuse or theft, Wright notes. Hearing aids, which typically cost between $2,000 and $3,000 each, often go missing. By reviewing a video together, families and administrators may find clues to a device's disappearance. Conversely, Wright empathizes with the main counterargument against camera use: the belief that "invasion of privacy is also invasion of human dignity."

Out of respect for modesty, ethical questions abound over whether a camera should be turned off while a patient is receiving personal care, such as dressing and undressing or using bedpans. Other ethical issues revolve around who may access the recordings, says Lori Smetanka, executive director of the National Consumer Voice for Quality Long-Term Care in Washington, D.C.

Video cameras, she contends, are only one tool in shielding residents from abuse. They are "not substitutes for personal involvement," she says. "People need to be very vigilant visiting their family members, and facilities have a responsibility to ensure that residents are free of abuse."

Lack of accountability perpetuates abuse in long-term care settings and stems in large part from systemic underfunding.

Educating employees in abuse prevention becomes paramount, and families should ask about staff training before placing their loved one in a long-term care facility, Smetanka says. Prior to installing a camera, she recommends consulting an attorney who is familiar with this issue.

But thoughts of a camera often don't occur to families until an adverse event affects their loved one, says Toby Edelman, a senior policy attorney at the Center for Medicare Advocacy, a nonprofit organization with headquarters in Washington, D.C., and Connecticut.

"These cameras can show exactly what's going on," she explains, noting that prosecutors have used the recordings in litigation. "When residents have injuries of unknown origin" and they can't verbalize what happened to them, "the cameras may document that yes, the resident was actually hit by somebody."

With a resident's safety and security being "the most important consideration," the American Health Care Association in Washington, D.C., which represents long-term and post-acute care providers, supports allowing states, clinicians, and patients to decide about camera use on a local level, says David Gifford, senior vice president of quality and regulatory affairs and chief medical officer.

"We've seen some success with tools such as permissive legislation, where residents and their loved ones have the ability to determine whether a camera is right for them while working with the center openly and ensuring the confidentiality of other residents," says Gifford, who practiced as a geriatrician. "It is important to note, however, that surveillance cameras are still only one element of the quality matrix. We can never hope to truly improve quality care by catching bad actors after the fact."

Lack of accountability perpetuates abuse in long-term care settings and stems in large part from systemic underfunding. Low wages and morale are tied to high turnover, and cameras don't address this overarching problem, says Clara Berridge, an assistant professor of social work at the University of Washington in Seattle, who has co-authored articles on surveillance devices in elder care.

Employees often don't perceive a nursing assistant position as a long-term career and may not feel vested in the workplace. Training in recognizing and reporting abuse becomes ineffective when workers quit shortly thereafter. Many must juggle multiple jobs to make ends meet. Staffing shortages are endemic, leading to inadequate oversight of residents and fewer abuse complaints being voiced, she says.

In Berridge's assessment, cameras may do more harm than good. In a survey she conducted of nursing homes and assisted-living facilities in the United States, respondents reported that recording devices tend to fuel workers' anxiety, feeding a culture that further demoralizes and dehumanizes the care they provide.

Consent becomes particularly thorny in shared rooms, which are more common than not in nursing homes. States that permit in-room cameras mandate that roommates or their legal representative be made aware. Even if the camera is directed away from their bed, it will still capture conversations as well as movements that enter its scope. "Surveillance isn't the best way to protect adults in need of support," Berridge says. "Public investment in quality care is."

"The camera is invaluable. But there's no law that says you can have it automatically, so that's wrong."

In the one-bedroom assisted-living apartment where Costanzo's mother lives alone, consent from another resident wasn't needed. Without a roommate, the camera is much less intrusive, although Costanzo wishes she had put one in the living room, not just the bedroom, for more security.

Her safety concerns escalated when she read about a Texas serial killer who smothered victims after gaining access to senior care facilities by "masquerading as a maintenance man." She points to such horrifying incidents, although exceedingly rare, as further justification for permitting cameras to help guard the vulnerable against abuse in long-term care settings. And she hopes to advocate for an applicable law in California.

"The camera is invaluable," says Costanzo, who pays for monthly Wi-Fi service so she can see and interact with her mother, who turns 94 in October, any time of day or night. "But there's no law that says you can have it automatically, so that's wrong."

Susan Kreimer
Susan Kreimer is a New York-based freelance journalist who has followed the landscape of health care since the late 1990s, initially as a staff reporter for major daily newspapers. She writes about breakthrough studies, personal health, and the business of clinical practice. Raised in the Chicago area, she holds a B.A. in Journalism/Mass Communication and French from the University of Iowa and an M.S. from the Columbia University Graduate School of Journalism.

A rendering of what a sky advertisement would look like.

(StartRocket)


Imagine enjoying a romantic night stargazing, cozying up for the evening – and you catch a perfectly timed ad for Outback Steakhouse.

Countries have sovereignty over their airspace, but the night sky itself is pretty much an open field.

That's the vision of StartRocket, a Russian startup planning to put well-lit advertisements into outer space. According to a recent interview, StartRocket says its first client is PepsiCo.

The Lowdown

Launching at twilight, during the early morning or early evening, the ads will fly on cubesats – small metallic boxes, roughly 10 centimeters on a side, commonly used as miniature satellites. The attached Mylar sails will reflect light from the rising or setting sun, making the ad appear like an "orbital billboard."

The advertisements will need all the reflected sunlight they can get: According to a 2016 report, 80 percent of the world's population and 99 percent of people in America and Europe live under light-polluted night skies. Showing advertisements in, say, Wyoming will be much easier than attracting attention in Midtown Manhattan – and risks adding a considerable amount of light pollution to an already overburdened night sky.

Next Up

The StartRocket advertising program is set to begin in 2021. The most recent rate is $20,000 for eight hours of advertising space.

But first, StartRocket has to win over consumers, regulators and space activists.

"I don't see it taking off now," says TED Fellow and University of Texas, Austin Associate Professor Dr. Moriba Jah. Jah is the creator of Astriagraph, an interactive tool to help monitor space junk orbiting Earth. "In general, the space community is anathema to advertisements from orbit to people on the ground… The global astronomy community will be fighting it tooth and nail."

Jah notes SpaceX's launch of 60 satellites last month. "Astronomers were up in arms since they are so bright, you can see them with the naked eye." It got to the point where Elon Musk had to defend himself to the astronomy community on Twitter.

Open Questions

Startups come and go, especially those still looking for funding – and StartRocket is both a startup and still looking. Frankly, it's unclear whether the ads will actually launch two years from now.

Space advertisements are more likely to be the future for less regulated and financially strapped areas.

The regulatory hurdles are just as unknown. According to Jah, countries have sovereignty over their airspace (think planes, balloons and drones), but the night sky itself is pretty much an open field. This doesn't remove the political ramifications, though, and any American-based launches would have to contend with the FCC, since it regulates advertisements, and the FAA, since it regulates flight.

Carbon credit-style offsets may help balance out the potential environmental and political damage done by sky ads. It isn't a coincidence that space pioneers Musk, Jeff Bezos, and Richard Branson succeeded at other ventures first, giving them deep enough pockets to survive red tape – something StartRocket's team doesn't have at the moment.

Space advertisements are more likely to be the future for less regulated, financially strapped areas. Depending on how ad companies negotiate with the local governments, it's easy to picture Kolkata with an "Enjoy Coke" advertisement blaring during a Ganges sunset.

"In rural places, it would be like having another moon," Jah says. "People would say the rich are now taking the sky away from us."

Damon Brown
Damon Brown co-founded the popular platonic connection app Cuddlr. Now he helps side hustlers, solopreneurs, and other non-traditional entrepreneurs bloom. He is author of the TED book "Our Virtual Shadow" and, most recently, the best-selling "The Bite-Sized Entrepreneur" series. Join his creative community at www.JoinDamon.me.

A patient with below-knee AMI amputation walks up the stairs.

(Photo credit: Matthew Orr, STAT News)


"Here's a question for you," I say to our dinner guests, dodging a knowing glance from my wife. "Imagine a future in which you could surgically replace your legs with robotic substitutes that had all the functionality and sensation of their biological counterparts. Let's say these new legs would allow you to run all day at 20 miles per hour without getting tired. Would you have the surgery?"

Why are we so married to the arbitrary distinction between rehabilitating and augmenting?

Like most people I pose this question to, our guests respond with some variation on the theme of "no way"; the idea of undergoing a surgical procedure with the sole purpose of augmenting performance beyond traditional human limits borders on the unthinkable.

"Would your answer change if you had arthritis in your knees?" This is where things get interesting. People think differently about intervention when injury or illness is involved. The idea of a major surgery becomes more tractable to us in the setting of rehabilitation.

Consider the simplistic example of human walking speed. The average human walks at a baseline three miles per hour. If someone is only able to walk at one mile per hour, we do everything we can to increase their walking ability. However, to take a person who is already able to walk at three miles per hour and surgically alter their body so that they can walk twice as fast seems, to us, unreasonable.

What fascinates me about this is that the three-mile-per-hour baseline is set by arbitrary limitations of the healthy human body. If we ignore this reference point altogether, and consider that each case simply offers an improvement in walking ability, the line between augmentation and rehabilitation all but disappears. Why, then, are we so married to this arbitrary distinction between rehabilitating and augmenting? What makes us hold so tightly to baseline human function?

Where We Stand Now

As the functionality of advanced prosthetic devices continues to increase at an astounding rate, questions like these are becoming more relevant. Experimental prostheses, intended for the rehabilitation of people with amputation, are now able to replicate the motions of biological limbs with high fidelity. Neural interfacing technologies enable a person with amputation to control these devices with their brain and nervous system. Before long, synthetic body parts will outperform biological ones.

Our approach allows people to not only control a prosthesis with their brain, but also to feel its movements as if it were their own limb.

Against this backdrop, my colleagues and I developed a methodology to improve the connection between the biological body and a synthetic limb. Our approach, known as the agonist-antagonist myoneural interface ("AMI" for short), enables us to reflect joint movement sensations from a prosthetic limb onto the human nervous system. In other words, the AMI allows people to not only control a prosthesis with their brain, but also to feel its movements as if it were their own limb. The AMI involves a reimagining of the amputation surgery, so that the resultant residual limb is better suited to interact with a neurally-controlled prosthesis. In addition to increasing functionality, the AMI was designed with the primary goal of enabling adoption of a prosthetic limb as part of a patient's physical identity (known as "embodiment").

Early results have been remarkable. Patients with below-knee AMI amputation are better able to control an experimental prosthetic leg, compared to people who had their legs amputated in the traditional way. In addition, the AMI patients show increased evidence of embodiment. They identify with the device, and describe feeling as though it is part of them, part of self.

Where We're Going

True embodiment of robotic devices has the potential to fundamentally alter humankind's relationship with the built world. Throughout history, humans have excelled as tool builders. We innovate in ways that allow us to design and augment the world around us. However, tools for augmentation are typically external to our body identity; there is a clean line drawn between smart phone and self. As we advance our ability to integrate synthetic systems with physical identity, humanity will have the capacity to sculpt that very identity, rather than just the world in which it exists.

For this potential to be realized, we will need to let go of our reservations about surgery for augmentation. In reality, this shift has already begun. Consider the approximately 17.5 million surgical and minimally invasive cosmetic procedures performed in the United States in 2017 alone. Many of these represent patients with no demonstrated medical need, who have opted to undergo a surgical procedure for the sole purpose of synthetically enhancing their body. The ethical basis for such a procedure is built on the individual perception that the benefits of that procedure outweigh its costs.

At present, it seems absurd that amputation would ever reach this point. However, as robotic technology improves and becomes more integrated with self, the balance of cost and benefit will shift, lending a new perspective on what now seems like an unfathomable decision to electively amputate a healthy limb. When this barrier is crossed, we will collide head-on with the question of whether it is acceptable for a person to "upgrade" such an essential part of their body.

At a societal level, the potential benefits of physical augmentation are far-reaching. The world of robotic limb augmentation will be a world of experienced surgeons whose hands are perfectly steady, firefighters whose legs allow them to kick through walls, and athletes who never again have to worry about injury. It will be a world in which a teenage boy and his grandmother embark together on a four-hour sprint through the woods, for the sheer joy of it. It will be a world in which the human experience is fundamentally enriched, because our bodies, which play such a defining role in that experience, are truly malleable.

This is not to say that such societal benefits stand without potential costs. One justifiable concern is the misuse of augmentative technologies. We are all quite familiar with the proverbial supervillain whose nervous system has been fused to that of an all-powerful robot.

The world of robotic limb augmentation will be a world of experienced surgeons whose hands are perfectly steady.

In reality, misuse is likely to be both subtler and more insidious than this. As with all new technology, careful legislation will be necessary to work against those who would hijack physical augmentations for violent or oppressive purposes. It will also be important to ensure broad access to these technologies, to protect against further socioeconomic stratification. This particular issue is helped by the tendency of the cost of a technology to scale inversely with market size. It is my hope that when robotic augmentations are as ubiquitous as cell phones, the technology will serve to equalize, rather than to stratify.

In our future bodies, when we as a society decide that the benefits of augmentation outweigh the costs, it will no longer matter whether the base materials that make us up are biological or synthetic. When our AMI patients are connected to their experimental prosthesis, it is irrelevant to them that the leg is made of metal and carbon fiber; to them, it is simply their leg. After our first patient wore the experimental prosthesis for the first time, he sent me an email that provides a look at the immense possibility the future holds:

What transpired is still slowly sinking in. I keep trying to describe the sensation to people. Then this morning my daughter asked me if I felt like a cyborg. The answer was, "No, I felt like I had a foot."

Tyler Clites
Tyler Clites builds human cyborgs. After graduating from Harvard in 2014 with a B.S. in Biomedical and Mechanical Engineering, Tyler earned his PhD in 2018 from the Harvard/MIT program in Health Sciences and Technology. As a postdoc in the Biomechatronics Group (Professor Hugh Herr, MIT Media Lab), his current research focuses on the development of novel techniques for limb amputation surgery, with the goal of improving the neural and mechanical interfaces between persons with amputation and their prosthetic devices. He is pursuing a career in academia, where he hopes to explore applications in which surgical and mechatronic design can be leveraged together in new bionic systems for physical rehabilitation and human augmentation.