Brain inflammation from Alzheimer's disease.

(NIH image gallery via Flickr)


Alzheimer's is a terrible disease that robs a person of their personality and memory before eventually leading to death. It's the sixth-leading cause of death in the U.S., and 5.8 million Americans are currently living with the disease.

It devastates people and families, and Alzheimer's and other forms of dementia are estimated to cost the U.S. $290 billion this year alone, a figure projected to grow to more than a trillion dollars a year by 2050.

There have been more than 200 unsuccessful attempts to find a cure for the disease, and the clinical trial termination rate is 98 percent.

Alzheimer's is caused by plaque deposits that develop in brain tissue and become toxic to brain cells. One of the major hurdles to finding a cure is that no one has found a way to clear those deposits from the tissue once they form. So scientists have turned their attention to early detection and prevention.

One very encouraging development has come out of the work of Chang Yi Wang, PhD. Wang is a prolific bio-inventor; one of her biggest successes is a foot-and-mouth disease vaccine for pigs that has been administered more than three billion times.

Mei Mei Hu

Brainstorm Health / Flickr.

In January, United Neuroscience, a biotech company founded by Wang, her daughter Mei Mei Hu, and her son-in-law Louis Reese, announced the first results from a phase IIa clinical trial of UB-311, an Alzheimer's vaccine.

The vaccine contains synthetic versions of amino acid chains that trigger antibodies to attack the Alzheimer's protein in the blood. Wang's vaccine is a significant improvement over previous attempts because it can attack the Alzheimer's protein without creating any adverse side effects.

"We were able to generate some antibodies in all patients, which is unusual for vaccines," Yi told Wired. "We're talking about almost a 100 percent response rate. So far, we have seen an improvement in three out of three measurements of cognitive performance for patients with mild Alzheimer's disease."

The researchers also claim it can delay the onset of the disease by five years. While this would be a godsend for people with the disease and their families, it could also, according to Elle, save Medicare and Medicaid more than $220 billion.

"You'd want to see larger numbers, but this looks like a beneficial treatment," James Brown, director of the Aston University Research Centre for Healthy Ageing, told Wired. "This looks like a silver bullet that can arrest or improve symptoms and, if it passes the next phase, it could be the best chance we've got."

"A word of caution is that it's a small study," says Drew Holzapfel, acting president of the nonprofit UsAgainstAlzheimer's, said according to Elle. "But the initial data is compelling."

The company is now working on its next clinical trial of the vaccine, and while hopes are high, so is the pressure. United Neuroscience has already invested $100 million in developing its vaccine platform. According to Reese, the company's ultimate goal is to create a host of vaccines that will be administered to protect people from chronic illness.

"We have a 50-year vision -- to immuno-sculpt people against chronic illness and chronic aging with vaccines as prolific as vaccines for infectious diseases," he told Elle.

[Editor's Note: This article was originally published by Upworthy here and has been republished with permission.]

Leo Shvedsky
Leo Shvedsky is a writer for GOOD and Upworthy.

Phil Gutis, an Alzheimer's patient who participated in a failed clinical trial, poses with his dog Abe.

(Courtesy Gutis)


Phil Gutis never had a stellar memory, but when he reached his early 50s, it became a problem he could no longer ignore. He had trouble calculating how much to tip after a meal, finding things he had just put on his desk, and understanding simple driving directions.

So three years ago, at age 54, he answered an ad for a drug trial seeking people experiencing memory issues. He scored so low on the memory tests that he was told something was wrong. MRIs and PET scans confirmed that he had early-onset Alzheimer's disease.

Gutis, who is a former New York Times reporter and American Civil Liberties Union spokesman, felt fortunate to get into an advanced clinical trial of a new treatment for Alzheimer's disease. The drug, called aducanumab, had shown promising results in earlier studies.

Four years of data had found that the drug effectively reduced the burden of protein fragments called beta-amyloids, which destroy connections between nerve cells. Amyloid plaques are found in the brains of patients with Alzheimer's disease and are associated with impairments in thinking and memory.

Gutis eagerly participated in the clinical trial and received 35 monthly infusions. "For the first 20 infusions, I did not know whether I was receiving the drug or the placebo," he says. "During the last 15 months, I received aducanumab. But it really didn't matter if I was receiving the drug or the placebo because on March 21, the trial was stopped because [the drug company] Biogen found that the treatments were ineffective."

The news was devastating to the trial participants, but also to the Alzheimer's research community. Earlier this year, another pharmaceutical company, Roche, announced it was discontinuing two of its Alzheimer's clinical trials. Between 1998 and 2017, industry sources reported 146 failed attempts at developing Alzheimer's drugs. There are five prescription drugs approved to treat its symptoms, but a cure remains elusive. The latest failures have left researchers scratching their heads about how best to attack the disease.

The failure of aducanumab was yet another setback for the estimated 5.8 million people who have Alzheimer's in the United States. Of these, around 5.6 million are older than 65 and 200,000 suffer from the younger-onset form, including Gutis.

Gutis is understandably distraught about the cancellation of the trial. "I really had hopes it would work. So did all the patients."

For now, he is exercising every day to keep his blood flowing, which is supposed to delay the progression of the disease, and trying to eat a low-fat diet. "But I know that none of it will make a difference. Alzheimer's is a progressive disease. There are no treatments to delay it, let alone cure it."

But while drug companies have failed so far, another group is stepping up to expedite the development of a cure: venture philanthropists. These are successful titans of industry and dedicated foundations who are donating large sums of money to fill a critical gap: funding research to look for new biomarkers.

Biomarkers are neurochemical indicators that can be used to detect the presence of a disease and objectively measure its progression. There are currently no validated biomarkers for Alzheimer's, but researchers are actively studying promising candidates. The hope is that they will find a reliable way to identify the disease even before the symptoms of mental decline show up, so that treatments can be directed at a very early stage.

Howard Fillit, Founding Executive Director and Chief Science Officer of the Alzheimer's Drug Discovery Foundation, says, "We need novel biomarkers to diagnose Alzheimer's disease and related dementias. But pharmaceutical companies don't put money into biomarkers research."

One of the venture philanthropists who has recently stepped up to the task is Bill Gates. In a January 2018 interview on the Today Show with Maria Shriver, whose father, Sargent Shriver, died of Alzheimer's disease in 2011, Gates announced that his own father had Alzheimer's disease. Gates told Shriver that he had invested $100 million into Alzheimer's research, with $50 million of his donation going to the Dementia Discovery Fund, which looks for new cures and treatments.

That August, Gates joined other investors in a new fund called Diagnostics Accelerator. The project aims to support researchers working to speed up new ideas for earlier and better diagnosis of the disease.

Gates and other donors committed more than $35 million to help launch it, and this April, Jeff and MacKenzie Bezos joined the coalition, bringing the current program funding to nearly $50 million.

"It makes sense that a challenge this significant would draw the attention of some of the world's leading thinkers."

None of these funders stand to make a profit on their donation, unlike traditional research investments by drug companies. The standard alternatives to such funding have upsides -- and downsides.

As Bill Gates wrote on his blog, "Investments from governments or charitable organizations are fantastic at generating new ideas and cutting-edge research -- but they're not always great at creating usable products, since no one stands to make a profit at the end of the day.

"Venture capital, on the other end of the spectrum, is more likely to develop a test that will reach patients, but its financial model favors projects that will earn big returns for investors. Venture philanthropy splits the difference. It incentivizes a bold, risk-taking approach to research with an end goal of a real product for real patients. If any of the projects backed by Diagnostics Accelerator succeed, our share of the financial windfall goes right back into the fund."

Gutis said he is thankful for any attention given to finding a cure for Alzheimer's.

"Most doctors and scientists will tell you that we're still in the dark ages when it comes to fully understanding how the brain works, let alone figuring out the cause or treatment for Alzheimer's.

"It makes sense that a challenge this significant would draw the attention of some of the world's leading thinkers. I only hope they can be more successful with their entrepreneurial approach to finding a cure than the drug companies have been with their more traditional paths."

David Levine
David Levine is co-chairman of Science Writers in New York (SWINY) and is a member of the National Association of Science Writers (NASW) and the Association of Healthcare Journalists. He was director of media relations for the American Cancer Society and senior director of communications at the NYC Health and Hospitals Corp. He has written for Scientific American, the Los Angeles Times, The New York Times, Nature Medicine, the Smithsonian, More and Good Housekeeping, and was a contributing editor at Physician's Weekly for 10 years. He has a BA and MA from Johns Hopkins University.

The author as a child with her father in 1981, before he was diagnosed with Alzheimer's.

(Courtesy of Jay Newton-Small)


Editor's Note: A team of researchers in Italy recently used artificial intelligence and machine learning to diagnose Alzheimer's disease from a brain scan a full decade before symptoms show up in the patient. While some people argue that early detection is critical, others believe the knowledge would do more harm than good. LeapsMag invited contributors with opposing opinions to share their perspectives.

Alzheimer's doesn't run in my family. When my father was diagnosed at the age of 58, we looked at his familial history. Both his parents lived into their late 80s. All of their surviving siblings were similarly long-lived, and none had had Alzheimer's or any related dementias. My dad had spent 20 years working for the United Nations in Africa in the '60s and '70s. He was convinced that the Alzheimer's had come from his time spent in dodgy mines, where he was exposed, without proper protection, to all kinds of chemical processes.

Maybe that was true. Maybe it wasn't. The theory that metals, particularly aluminum, are an environmental factor leading to Alzheimer's has been around for a while. It's mostly been debunked, but clearly something is causing this epidemic, as the vast majority of the cases in the world today are age-related. But no one knows what the trigger is, nor are we close to knowing.

If my father had had the Alzheimer's gene, I would go get myself checked for it. If some new MRI were commercially available to scan my brain and let me know if I was developing Alzheimer's, I would also take that test. There are four reasons why.

First, studies have shown that lifestyle has a major impact on the disease. I already run three miles a day. I eat relatively healthily. But like anyone, I don't live strictly on boiled chicken and broccoli. And I definitely enjoy a glass of wine or two. If I knew I had a propensity for the disease, or was developing it, I would be more diligent. I would eat my broccoli and cut out my wine. Life would be less fun, but I'd get more life and that's what's important.

The last picture taken of the author with her father before his death, in 2015.

Second, I would also have time to create an end-of-life plan the way my father did not. He told me repeatedly early on in his diagnosis that he did not want to live when he no longer knew me, when he became a burden, when he couldn't feed or bathe himself. I did my best in his final years to help him die quicker: I know that was what he wanted. But, given U.S. laws, all that meant was taking him off his heart and stroke medications and letting him eat anything he wanted, no matter how unhealthy. Knowing what's to come, having seen him go through it, I might consider moving to Belgium, which has begun to allow assisted suicide of those living with Alzheimer's and dementia if they can clearly state their intentions early on in the disease when they still have clarity of mind.

Next, I could help. Right now, there are dozens of Alzheimer's and dementia studies in the works. They are short thousands of willing test subjects. One of the top barriers to learning what's triggering the disease, and finding a cure, is populating these studies. So, knowing would make me a stronger candidate and would potentially help others down the road.

Finally, it would change my priorities. My father died the longest death possible: he succumbed last year more than 15 years after his diagnosis. My mother died the quickest possible way: she had a stress-related brain aneurysm 10 years after my father's diagnosis. Caring for him was too much for her and aneurysms ran in her family; her mother died of one as well. I already get a scan once every five years to see if I'm developing a brain aneurysm. If I am, odds are only 66% that they can operate on it—some aneurysms develop much too deep in the brain to operate, like my mother's.

Would she have wanted to know? Even though the aneurysm in her case was inoperable? I'm not sure. But I imagine if she had known, she would've lived her final years differently. She might have taken that trip to Alaska that she debated but thought was too expensive. She might have gotten organized earlier to make out a will so I wasn't left with chaos in the wake of her death; we'd planned for my father's death, knowing he was ill, but not my mother's. And she might have finally gotten around to dictating her story to me, as she'd always promised me she would when she found the time.

With my startup MemoryWell, I spend my life now collecting senior stories before they are lost, in part because telling my father's story at the end of his life helped his care. But it's also in part for the story I lost with my mother.

If I knew that my time was limited, I'd not worry so much about saving for retirement. I'd make progress on my bucket list: hike Machu Picchu, scuba dive the Maldives, or raft the Grand Canyon. I'd tell my loved ones as much as I can in my time remaining how much they mean to me. And I would spend more time writing my own story to pass it down—finally finishing the book I've been working on. Maybe it's the writer in me, or maybe it's that I don't have kids of my own yet to carry on a legacy, but I'd want my story to be known, to have others learn from my experiences. And that's the biggest gift knowing would give me.

Editor's Note: Consider the other side of the argument here.

Jay Newton-Small
Jay Newton-Small is founder and CEO of MemoryWell, a network of more than 350 journalists writing the life stories of seniors with the aim of improving their care and saving their memories. She’s a former longtime TIME Magazine correspondent, where she remains a contributor.

Scientist in laboratory

(© kkolosov / Fotolia)


"The graveyard of hope." That's what experts call the quest for effective Alzheimer's treatments, a two-decade effort that has been marked by one costly and high-profile failure after another. Nearly all of the drugs tested target one of the key hallmarks of Alzheimer's disease: amyloid plaques, the barnacle-like proteins long considered the culprits behind the memory-robbing ravages of the disease. Yet all the anti-amyloid drugs have flopped miserably, prompting some scientists to believe we've fingered the wrong villain.

"We're flogging a dead horse," says Peter Davies, PhD, an Alzheimer's researcher at the Feinstein Institute for Medical Research in New York. "The fact that no one's gotten better suggests that you have the wrong mechanism."

If the naysayers are right, how could a scientific juggernaut of this magnitude—involving hundreds of scientists in academia and industry at a cost of tens of billions of dollars—be so far off the mark? There are no easy answers, but some experts believe this calls into question how research is conducted, and blame part of the failure on the insular culture of the scientific aristocracy at leading academic institutions.

"The field began to be dominated by narrow views."

"The field began to be dominated by narrow views," says George Perry, PhD, an Alzheimer's researcher and dean of the College of Sciences at the University of Texas in San Antonio. "The people pushing this were incredibly articulate, powerful and smart. They'd go to scientific meetings and all hang around with each other and they'd self-reinforce."

In fairness, there was solid science driving this. Post-mortem analyses of Alzheimer's patients found their brains were riddled with amyloid plaques. People with a strong family history of Alzheimer's had mutations in genes involved in amyloid production. And in animal studies, scientists found that if amyloids were inserted into the brains of transgenic mice, the mice exhibited signs of memory loss. Remove the amyloids and they suddenly got better. This body of research helped launch the Amyloid Cascade Hypothesis of the disease in 1992—which has driven research ever since.

Scientists believed that the increase in the production of these renegade proteins, which form sticky plaques and collect outside of the nerve cells in the brain, triggers a series of events that interfere with the signaling system between synapses. This seems to prevent cells from relaying messages or talking to each other, causing memory loss, confusion and increasing difficulties doing the normal tasks of life. The path forward seemed clear: stop amyloid production and prevent disease progression. "We were going after the obvious abnormality," says Dr. David Knopman, a neurologist and Alzheimer's researcher at the Mayo Clinic in Rochester, Minnesota.

"Why wouldn't you do that?" Why ideed.

In hindsight, though, there was no real smoking gun—no one ever showed precisely how the production of amyloids instigates the destruction of vital brain circuits.

"Amyloids are clearly important," says Perry, "but they have not proven to be necessary and sufficient for the development of this disease."

Ironically, there have been hints all along that amyloids may not be toxic bad boys.

A handful of studies revealed that amyloid proteins are produced in healthy brains to protect synapses. Research on animal models that mimic diseases suggests that certain forms of amyloids can ease damage from strokes, traumatic brain injuries and even heart attacks. In a 2013 study, to cite just one example, a Stanford University team injected synthetic amyloids into paralyzed mice with an inflammatory disorder similar to multiple sclerosis. Instead of worsening their symptoms—which is what the researchers expected to happen—the mice could suddenly walk again. Remove the amyloids, and they became paralyzed once more.

Still other studies suggest amyloids may actually function as molecular guardians dispatched to silence inflammation and mop up errant cells after an injury as part of the body's waste management system. "The presence of amyloids is a protective response to something going wrong, a threat," says Dr. Dale Bredesen, a UCLA neurologist. "But the problem arises when the threats are chronic, multiple, unrelenting and intense. The defenses the brain mounts are also intense and these protective mechanisms cross the line into causing harm, and killing the very synapses and brain cells the amyloid was called up to protect."

So how did research get derailed?

In a way, we're victims of our own success, critics say.

Early medical triumphs in the heady post-World War II era, like antibiotics or the polio vaccine that all but eradicated the crippling childhood killer, reinforced the magic-bullet idea of curing disease: find a target and then hit it relentlessly. That's why, when scientists made the link between amyloids and disease progression, Big Pharma jumped on the bandwagon in hopes of inventing a blockbuster drug. This approach is fine when you have an acute illness, like an infectious disease caused by a single agent, but not for something as complicated as Alzheimer's.

The other piece of the problem is the dwindling federal dollars for basic research. Maverick scientists find it difficult to secure funding, which means that other possible targets or approaches remain relatively unexplored—and drug companies are understandably reluctant to sponsor fishing expeditions with little guarantee of a payoff. "Very influential people were driving this hypothesis," says Davies, and with careers on the line, "there was not enough objectivity or skepticism about that hypothesis."

Still, no one is disputing the importance of anti-amyloid drugs—and ongoing clinical trials, like the DIAN and A4 studies, are intervening earlier in patients who are at a high risk of developing Alzheimer's, but before they're symptomatic. "The only way to know if this is really a dead end is if you take it as far as it can go," says Knopman. "I believe the A4 study is the proper way to test the amyloid hypothesis."

But according to some experts, the latest thinking is that Alzheimer's is triggered by a range of factors, including genetics, poor diet, stress and lack of exercise.

"Alzheimer's is like other chronic age-related diseases and is multi-factorial," says Perry. "Modulating amyloids may have value but other avenues need to be explored."

Linda Marsa
Linda Marsa is a contributing editor at Discover, a former Los Angeles Times reporter and author of Fevered: Why a Hotter Planet Will Harm Our Health and How We Can Save Ourselves (Rodale, 2013), which the New York Times called “gripping to read.” Her work has been anthologized in The Best American Science Writing, and she has written for numerous publications, including Newsweek, U.S. News & World Report, Nautilus, Men’s Journal, Playboy, Pacific Standard and Aeon.

Human cells under a microscope

(© klickit24 / Fotolia)


BIG QUESTION OF THE MONTH: Should we use CRISPR, the new technique that enables precise DNA editing, to change the genes of human embryos to eradicate disease--or even to enhance desirable traits? LeapsMag invited three leading experts to weigh in.

Now that researchers around the world have begun to edit the genes of human embryos with CRISPR, the ethical debate has become more timely than ever: Should this kind of research be on the table or categorically ruled out?

All of us need gene editing to be pursued and, if possible, made safe enough to use in humans. Not only to pave the way for effective procedures on adults but, more importantly, to keep open the possibility of using gene editing to protect embryos from susceptibility to major diseases and to prevent other debilitating genetic conditions from being passed on through them to future generations.

Objections to gene editing in embryos rest on three fallacious arguments:

  1. Gene editing is wrong because it affects future generations, the argument being that the human germline is sacred and inviolable.
  2. It constitutes an unknown and therefore unacceptable risk to future generations.
  3. The inability to obtain the consent of those future generations means we must not use gene editing.

We should be clear that there is no risk-free precautionary option: just as justice delayed is justice denied, so therapy delayed is therapy denied.

Regarding the first point, many objections to germline interventions emphasize that such interventions are objectionable in that they affect "generations down the line". But this is true, not only of all assisted reproductive technologies, but of all reproduction of any kind.

As for the second point, every year an estimated 7.9 million children, 6% of total births worldwide, are born with a serious birth defect of genetic or partially genetic origin. Had sexual reproduction been invented by scientists rather than resulting from our evolved biology, it would never have been licensed by regulators: far too inefficient and dangerous!

If the appropriate benchmark for permissible risk of harm to future generations is sexual reproduction, other germline-changing techniques would have to pose severe, foreseeable dangers in order to fail that benchmark.

Raising the third point in his statement on gene-editing in human embryos, Francis S. Collins, director of the National Institutes of Health, stated: "The strong arguments against engaging in this activity remain … These include the serious and unquantifiable safety issues, ethical issues presented by altering the germline in a way that affects the next generation without their consent."

"Serious and unquantifiable" safety issues feature in all new technologies but consent is simply irrelevant for the simple and sufficient reason that there are no relevant people in existence capable of either giving or withholding consent to these sorts of changes in their own germline.

We all have to make decisions for future people without considering their inevitably absent consent. All would-be/might-be parents make numerous decisions about issues that might affect their future children. They do this all the time without thinking about the consent of those children.

George Bernard Shaw and Isadora Duncan were possibly apocryphal exceptions. She, apparently, said to him something like: "Why don't we have a child? With my looks and your brains it cannot fail," and received Shaw's more rational assessment: "Yes, but what if it has my looks and your brains?"

If there is a discernible duty here, it is surely to try to create the best possible child, a child who will be the healthiest, most intelligent and most resilient to disease reasonably possible given the parents' other priorities. This is why we educate and vaccinate our children and give them a good diet if we can. That is what it is to act for the best, all things considered. This we have moral reasons to do; but they are not necessarily overriding reasons.

"There is no morally significant line between therapy and enhancement."

There is no morally significant line that can be drawn between therapy and enhancement. As I write these words in my London apartment, I am bathed in synthetic sunshine, one of the oldest and most amazing enhancement technologies. Before its invention, our ancestors had to rest or hide in the dark. With the advent of synthetic sunshine--firelight, candlelight, lamplight and electric light--we could work and play as long as we wished.

Stephen Hawking initially predicted that we might have about 7.6 billion years to go before the Earth gives up on us; he recently revised his position in relation to the Earth's continuing habitability as opposed to its physical survival: "We must also continue to go into space for the future of humanity," he said. "I don't think we will survive another thousand years without escaping beyond our fragile planet."

We will at some point have to escape both beyond our fragile planet and our fragile nature. One way to enhance our capacity to do both these things is by improving on human nature where we can do so in ways that are "safe enough." What we all have an inescapable moral duty to do is to continue with scientific investigation of gene editing techniques to the point at which we can make a rational choice. We must certainly not stop now.

At the end of a 2015 summit where I spoke about this issue, the renowned Harvard geneticist George Church noted that gene editing "opens up the possibility of not just transplantation from pigs to humans but the whole idea that a pig organ is perfectible … Gene editing could ensure the organs are very clean, available on demand and healthy, so they could be superior to human donor organs."

"We know for sure that in the future there will be no more human beings and no more planet Earth."

We know for sure that in the future there will be no more human beings and no more planet Earth. Either we will have been wiped out by our own foolishness or by brute forces of nature, or we will have further evolved by a process more rational and much quicker than Darwinian evolution--a process I described in my book Enhancing Evolution. Even more certain is that there will be no more planet Earth. Our sun will die, and with it, all possibility of life on this planet.

As I say in my recent book How to Be Good:

By the time this happens, we may hope that our better evolved successors will have developed the science and the technology needed to survive and to enable us (them) to find and colonize another planet or perhaps even to build another planet; and in the meanwhile, to cope better with the problems presented by living on this planet.

Editor's Note: Check out the viewpoints expressing condemnation and mild curiosity.

John Harris
John Harris, FMedSci, Member of Academia Europaea, FRSA, BA, DPhil, Hon DLitt, is Professor Emeritus in Bioethics at the University of Manchester and Visiting Professor at Kings College London. His many books include On Cloning (Routledge, 2004), Enhancing Evolution (Princeton University Press, 2007), and How to Be Good (Oxford University Press, 2016).