Mathematical Modelling of COVID-19: CIHR awards $666,667 to the Fields Institute in partnership with AARMS, CRM and PIMS @ Removing macrophages shows success against ovarian cancer in mice – By removing two kinds of macrophages in mice, researchers showed that ovarian tumours in mice were reduced in size and stopped spreading @ Engineers put tens of thousands of artificial brain synapses on a single chip – The design could advance the development of small, portable AI devices. Jennifer Chu | MIT News Office June 8, 2020 @ Gene Therapy Comes of Age: Promising Treatment Option for Human Diseases @ MAY 26, 2020 – Physicist creates fifth state of matter from the living room by Neil Vowles, University of Sussex & OTHER VERY IMPORTANT INFORMATION OF THE WORLD, LIKE LINKS AND IMAGES

Please download and share!! The diffusion of very important information and knowledge is always essential for the world's progress!! Thanks!!

  • –> Master's Dissertation (Mestrado) – Tables, Figures and Graphics (Tabelas, Figuras e Gráficos) – ´´My´´ Dissertation @ #Innovation #energy #life #health #Countries #Time #Researches #Reference #Graphics #Ages #Age #Mice #People #Person #Mouse #Genetics #PersonalizedMedicine #Diagnosis #Prognosis #Treatment #Disease #UnknownDiseases #Future #VeryEfficientDrugs #VeryEfficientVaccines #VeryEfficientTherapeuticalSubstances #Tests #Laboratories #Investments #Details #HumanLongevity #DNA #Cell #Memory #Physiology #Nanomedicine #Nanotechnology #Biochemistry #NewMedicalDevices #GeneticEngineering #Internet #History #Science #World

Pathol Res Pract. 2012 Jul 15;208(7):377-81. doi: 10.1016/j.prp.2012.04.006. Epub 2012 Jun 8.

The influence of physical activity in the progression of experimental lung cancer in mice

Renato Batista Paceli, Rodrigo Nunes Cal, Carlos Henrique Ferreira dos Santos, José Antonio Cordeiro, Cassiano Merussi Neiva, Kazuo Kawano Nagamine, Patrícia Maluf Cury


Impact_Factor-wise_Top100Science_Journals

GRUPO AF1 / GROUP AFA1 – Aerobic Physical Activity (Atividade Física Aeróbia) – ´´My´´ Dissertation – Faculty of Medicine of Sao Jose do Rio Preto

GRUPO AFAN 1 / GROUP AFAN 1 – Anaerobic Physical Activity (Atividade Física Anaeróbia) – ´´My´´ Dissertation – Faculty of Medicine of Sao Jose do Rio Preto

GRUPO AF2 / GROUP AFA2 – Aerobic Physical Activity (Atividade Física Aeróbia) – ´´My´´ Dissertation – Faculty of Medicine of Sao Jose do Rio Preto

GRUPO AFAN 2 / GROUP AFAN 2 – Anaerobic Physical Activity (Atividade Física Anaeróbia) – ´´My´´ Dissertation – Faculty of Medicine of Sao Jose do Rio Preto

Slides – Mestrado (Master's) – ´´My´´ Dissertation – Faculty of Medicine of Sao Jose do Rio Preto

CARCINÓGENO DMBA EM MODELOS EXPERIMENTAIS

DMBA CARCINOGEN IN EXPERIMENTAL MODELS

Evaluation of the influence of aerobic and anaerobic physical activity on the progression of experimental lung cancer (Avaliação da influência da atividade física aeróbia e anaeróbia na progressão do câncer de pulmão experimental) – Summary / Resumo – ´´My´´ Dissertation – Faculty of Medicine of Sao Jose do Rio Preto

https://pubmed.ncbi.nlm.nih.gov/22683274/

Abstract

Lung cancer is one of the most common neoplasms in the world and the leading cause of cancer mortality. Many epidemiologic studies have suggested that physical activity may reduce the risk of lung cancer, and other works have evaluated the effectiveness of physical activity in the suppression, remission, and reduction of tumor recurrence. The aim of this study was to evaluate the effects of aerobic and anaerobic physical activity on the development and progression of lung cancer. Lung tumors were induced with a dose of 3 mg of urethane/kg in 67 male Balb-C mice, divided into three groups: group 1, 24 mice treated with urethane and without physical activity; group 2, 25 mice treated with urethane and subjected to free aerobic swimming exercise; group 3, 18 mice treated with urethane and subjected to anaerobic swimming exercise with gradual loading of 5-20% of body weight. All the animals were sacrificed after 20 weeks, and lung lesions were analyzed. The median number of lesions (nodules and hyperplasia) was 3.0 for group 1, 2.0 for group 2 and 1.5 for group 3 (p=0.052). When comparing only the presence or absence of lesions, there was a decrease in the number of lesions in group 3 as compared with group 1 (p=0.03), but not in relation to group 2. There were no metastases or other changes in other organs. Anaerobic, but not aerobic, physical activity diminishes the incidence of experimental lung tumors.

The Fields Institute, in partnership with AARMS, CRM, and PIMS, and with the collaboration of PHAC, VIDO-Intervac, and NRC, will receive funding for COVID-19-related research.

@ Scientists Linked Artificial and Biological Neurons in a Network—and Amazingly, It Worked – Shelly Fan – 3 months ago

@ ABSTRACTIONS BLOG – Why Gravity Is Not Like the Other Forces – We asked four physicists why gravity stands out among the forces of nature. We got four different answers.

@ SPACE-TIME How Space and Time Could Be a Quantum Error-Correcting Code – The same codes needed to thwart errors in quantum computers may also give the fabric of space-time its intrinsic robustness. Natalie Wolchover Senior Writer/Editor January 3, 2019

MEDSHIFT RELEASES NEW VERSION OF IOT CONNECTED DEVICE PLATFORM, CONTINUING TO LEAD IN THE MEDICAL TECHNOLOGY FIELD JUNE 09, 2020 / MEDSHIFT

https://phys.org/news/2020-05-physicist-state-room.html?fbclid=IwAR1m1urzMNVsVkfuMvM4Knki_oVUmSwu-uRwwQOkIdTYDKixtiOrDNQyKnU

https://theinternetofthings.report/news/medshift-releases-new-version-of-iot-connected-device-platform-continuing-to-lead-in-the-medical-technology-field/7851?fbclid=IwAR3cFdgljuR0AKjs16mIv6EFTLTYktIJ7g4C9szRODvtI6gtXBAxVjsywlY

https://teletype.in/@biotechnology12/GkS3JvvCM?fbclid=IwAR0pN16TbEtc_JqSdr7ooGMY00ASI1mmC_9oTvQR6IuIO-Lbz3NjKi5HK64

https://www.quantamagazine.org/how-space-and-time-could-be-a-quantum-error-correcting-code-20190103/?fbclid=IwAR08SVAnncZypYqqgjX_DykcklaO-tts4BkTYfbv30zhJN0xU9k364nZqiI

http://news.mit.edu/2020/thousands-artificial-brain-synapses-single-chip-0608

https://www.quantamagazine.org/why-gravity-is-not-like-the-other-forces-20200615/?fbclid=IwAR1cTqYuusuz4iNn1sOWaYcNlkTPAtsS2ty2sOmy6KU4LyxE8fdizgIOQQ0

https://singularityhub.com/2020/03/10/scientists-linked-artificial-and-biological-neurons-in-a-network-and-amazingly-it-worked/amp/?__twitter_impression=true&fbclid=IwAR1zqLVGIRBN3Rw_IXm3XwSMrRst6mNc-bz9f72cpmlJvQhwrSgu99VDWWQ

Removing macrophages shows success against ovarian cancer in mice

http://www.fields.utoronto.ca/news/Mathematical-Modelling-COVID-19-CIHR-awards-666667-to-Fields-Institute-partnership-AARMS-CRM?fbclid=IwAR3mTbYRBvVQUPll3ubSJuOtiMqbBUG6YtKdkNMmfkqq90TeClB5RVtd9Xk

https://www.sciencemag.org/news/2020/03/mathematics-life-and-death-how-disease-models-shape-national-shutdowns-and-other?fbclid=IwAR296Gc2ecHdc1Sv-Ru2RcQPZoDfpTr0Ro8Yj_Ov5IH0ywzYIwO_frwtbkw

 



Dutch models of COVID-19 are designed to help prevent overloading of hospitals and the need to transfer patients.

THOMAS ANGUS/IMPERIAL COLLEGE LONDON

Mathematics of life and death: How disease models shape national shutdowns and other pandemic policies

Jacco Wallinga’s computer simulations are about to face a high-stakes reality check. Wallinga is a mathematician and the chief epidemic modeler at the National Institute for Public Health and the Environment (RIVM), which is advising the Dutch government on what actions, such as closing schools and businesses, will help control the spread of the novel coronavirus in the country.

The Netherlands has so far chosen a softer set of measures than most Western European countries; it was late to close its schools and restaurants and hasn’t ordered a full lockdown. In a 16 March speech, Prime Minister Mark Rutte rejected “working endlessly to contain the virus” and “shutting down the country completely.” Instead, he opted for “controlled spread” of the virus among the groups least at risk of severe illness while making sure the health system isn’t swamped with COVID-19 patients. He called on the public to respect RIVM’s expertise on how to thread that needle. Wallinga’s models predict that the number of infected people needing hospitalization, his most important metric, will taper off by the end of the week. But if the models are wrong, the demand for intensive care beds could outstrip supply, as it has, tragically, in Italy and Spain.

COVID-19 isn’t the first infectious disease scientists have modeled—Ebola and Zika are recent examples—but never has so much depended on their work. Entire cities and countries have been locked down based on hastily done forecasts that often haven’t been peer reviewed. “It has suddenly become very visible how much the response to infectious diseases is based on models,” Wallinga says. For the modelers, “it’s a huge responsibility,” says epidemiologist Caitlin Rivers of the Johns Hopkins University Center for Health Security, who co-authored a report about the future of outbreak modeling in the United States that her center released yesterday.

Just how influential those models are became apparent over the past 2 weeks in the United Kingdom. Based partly on modeling work by a group at Imperial College London, the U.K. government at first implemented fewer measures than many other countries—not unlike the strategy the Netherlands is pursuing. Citywide lockdowns and school closures, as China initially mandated, “would result in a large second epidemic once measures were lifted,” a group of modelers that advises the government concluded in a statement. Less severe controls would still reduce the epidemic’s peak and make any rebound less severe, they predicted.

But on 16 March, the Imperial College group published a dramatically revised model that concluded—based on fresh data from the United Kingdom and Italy—that even a reduced peak would fill twice as many intensive care beds as estimated previously, overwhelming capacity. The only choice, they concluded, was to go all out on control measures. At best, strict measures might be periodically eased for short periods, the group said (see graphic, below). The U.K. government shifted course within days and announced a strict lockdown.

Epidemic modelers are the first to admit their projections can be off. “All models are wrong, but some are useful,” statistician George Box supposedly once said—a phrase that has become a cliché in the field.

Modeling a bleak future

U.K. control measures could be let up once in a while, a model suggests, until demand for intensive care unit (ICU) beds hits a threshold.
[Chart: Weekly ICU cases from May 2020 to September 2021, with strict control measure periods marked.]
IMPERIAL COLLEGE COVID-19 RESPONSE TEAM, ADAPTED BY C. BICKEL/SCIENCE

Textbook mathematics

It’s not that the science behind modeling is controversial. Wallinga uses a well-established epidemic model that divides the Dutch population into four groups, or compartments in the field’s lingo: healthy, sick, recovered, or dead. Equations determine how many people move between compartments as weeks and months pass. “The mathematical side is pretty textbook,” he says. But model outcomes vary widely depending on the characteristics of a pathogen and the affected population.
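
For readers who want to see what such a compartment model looks like in practice, here is a minimal sketch in Python. It is a generic SIRD-type model (susceptible or "healthy", infectious, recovered, dead) with made-up parameter values; it is not the RIVM team's actual model or their estimates.

```python
# A minimal compartmental (SIRD-type) epidemic model, illustrating the
# "textbook mathematics" described above: the population is split into
# susceptible, infectious, recovered, and dead compartments, and simple
# equations move people between them over time.
# All parameter values below are illustrative assumptions only.

import numpy as np

def simulate_sird(population=17_000_000,   # roughly the Dutch population (assumption)
                  i0=100, r0_basic=2.2, infectious_days=5.0,
                  fatality=0.01, days=300, dt=0.1):
    """Integrate the SIRD equations with simple Euler steps."""
    gamma = 1.0 / infectious_days      # removal (recovery/death) rate
    beta = r0_basic * gamma            # transmission rate, since R0 = beta / gamma
    s, i, r, d = population - i0, i0, 0.0, 0.0
    history = []
    for step in range(int(days / dt)):
        new_infections = beta * s * i / population * dt
        removals = gamma * i * dt
        s -= new_infections
        i += new_infections - removals
        r += removals * (1.0 - fatality)
        d += removals * fatality
        history.append((step * dt, s, i, r, d))
    return np.array(history)

if __name__ == "__main__":
    traj = simulate_sird()
    peak_day = traj[np.argmax(traj[:, 2]), 0]
    print(f"Illustrative epidemic peak around day {peak_day:.0f}")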

Because the virus that causes COVID-19 is new, modelers need estimates for key model parameters. These estimates, particularly in the early days of an outbreak, also come from the work of modelers. For instance, by late January several groups had published roughly similar estimates of the number of new infections caused by each infected person when no control measures are taken—a parameter epidemiologists call R0. “This approximate consensus so early in the pandemic gave modelers a chance to warn of this new pathogen’s epidemic and pandemic potential less than 3 weeks after the first Disease Outbreak News report was released by the WHO [World Health Organization] about the outbreak,” says Maia Majumder, a computational epidemiologist at Harvard Medical School whose group produced one of those early estimates.
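
As a worked illustration of how such early R0 estimates connect to observable data: in a simple SIR-type compartment model like the sketch above (and only in that idealized setting), early case counts grow exponentially, and the growth rate $r$ ties directly to $R_0$,

$$\frac{dI}{dt} \approx (\beta - \gamma)\,I \;\Rightarrow\; I(t) \approx I(0)\,e^{rt}, \qquad r = \beta - \gamma = \gamma\,(R_0 - 1),$$

so that $R_0 \approx 1 + r\,T_{\text{inf}}$, where $T_{\text{inf}} = 1/\gamma$ is the mean infectious period. This is a textbook relation for the idealized model, not the method any particular group used for its published estimate.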

Wallinga says his team also spent a lot of time estimating R0 for SARS-CoV-2, the virus that causes COVID-19, and feels sure it’s just over two. He is also confident about his estimate that 3 to 6 days elapse between the moment someone is infected and the time they start to infect others. From a 2017 survey of the Dutch population, the RIVM team also has good estimates of how many contacts people of different ages have at home, school, work, and during leisure. Wallinga says he’s least confident about the susceptibility of each age group to infection and the rate at which people of various ages transmit the virus. The best estimates come from a study done in Shenzhen, a city in southern China, he says.

Compartment models assume the population is homogeneously mixed, a reasonable assumption for a small country like the Netherlands. Other modeling groups don’t use compartments but simulate the day-to-day interactions of millions of individuals. Such models are better able to depict heterogeneous countries, such as the United States, or all of Europe. WHO organizes regular calls for COVID-19 modelers to compare strategies and outcomes, Wallinga says: “That’s a huge help in reducing discrepancies between the models that policymakers find difficult to handle.”
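
To make the contrast concrete, here is a toy individual-based (agent-based) simulation in Python. Every parameter in it is an illustrative assumption; real agent-based models of this kind track millions of individuals with realistic household, school, and workplace contact structure.

```python
# A toy individual-based alternative to the compartment model: instead of
# aggregate equations, each person is tracked separately, which makes it
# easy to add heterogeneity (here, a random per-day contact count).
# All values are illustrative assumptions, not any group's real model.

import random

def simulate_agents(n=10_000, i0=10, days=200,
                    mean_contacts=8, p_transmit=0.03, infectious_days=5):
    random.seed(1)
    # state: 0 = susceptible, 1 = infectious, 2 = recovered
    state = [0] * n
    days_left = [0] * n
    for k in range(i0):
        state[k], days_left[k] = 1, infectious_days
    for _day in range(days):
        infectious = [k for k in range(n) if state[k] == 1]
        for k in infectious:
            # heterogeneous mixing: each person draws their own contact count
            for _ in range(random.randint(0, 2 * mean_contacts)):
                other = random.randrange(n)
                if state[other] == 0 and random.random() < p_transmit:
                    state[other], days_left[other] = 1, infectious_days
            days_left[k] -= 1
            if days_left[k] == 0:
                state[k] = 2
        if not any(s == 1 for s in state):
            break
    return sum(1 for s in state if s == 2)

if __name__ == "__main__":
    print("Toy final attack rate:", simulate_agents() / 10_000)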

Still, models can produce vastly different pictures. A widely publicized, controversial modeling study published yesterday by a group at the University of Oxford argues that the deaths observed in the United Kingdom could be explained by a very different scenario from the currently accepted one. Rather than SARS-CoV-2 spreading in recent weeks and causing severe disease in a significant percentage of people, as most models suggest, the virus might have been spreading in the United Kingdom since January and could have already infected up to half of the population, causing severe disease only in a tiny fraction. Both scenarios are equally plausible, says Sunetra Gupta, the theoretical epidemiologist who led the Oxford work. “I do think it is missing from the thinking that there is an equally big possibility that a lot of us are immune,” she says. The model itself cannot answer the question, she says; only widespread testing for antibodies can, and that needs to be done urgently.

Adam Kucharski, a modeler at the London School of Hygiene & Tropical Medicine, says the Oxford group’s new scenario is unlikely. Scientists don’t know exactly how many people develop very mild symptoms or none at all, he says, but data from the Diamond Princess—a cruise ship docked in Yokohama, Japan, for 2 weeks that had a big COVID-19 outbreak—and from repatriation flights and other sources argue against a huge number of asymptomatic cases. “We don’t know at the moment, is it 50% asymptomatic or is it 20% or 10%,” he says. “I don’t think the question is: Is it 50%  asymptomatic or 99.5%.”

Riding tigers

In their review of U.S. outbreak modeling, Rivers and her colleagues note that most of the key players are academics with little role in policy. They don’t typically “participate in the decision-making processes … they sort of pivot into a new world when an emergency hits,” she says. “It would be more effective if they could be on-site with the government, working side by side with decision makers.” Rivers argues for the creation of a National Infectious Disease Forecasting Center, akin to the National Weather Service. It would be the primary source of models in a crisis and strengthen outbreak science in “peacetime.”

Policymakers have relied too heavily on COVID-19 models, says Devi Sridhar, a global health expert at the University of Edinburgh. “I’m not really sure whether the theoretical models will play out in real life.” And it’s dangerous for politicians to trust models that claim to show how a little-studied virus can be kept in check, says Harvard University epidemiologist William Hanage. “It’s like, you’ve decided you’ve got to ride a tiger,” he says, “except you don’t know where the tiger is, how big it is, or how many tigers there actually are.”

Models are at their most useful when they identify something that is not obvious, Kucharski says. One valuable function, he says, was to flag that temperature screening at airports will miss most coronavirus-infected people.

There’s also a lot that models don’t capture. They cannot anticipate, say, the development of a faster, easier test to identify and isolate infected people or an effective antiviral that reduces the need for hospital beds. “That’s the nature of modeling: We put in what we know,” says Ira Longini, a modeler at the University of Florida. Nor do most models factor in the anguish of social distancing, or whether the public obeys orders to stay home. Recent data from Hong Kong and Singapore suggest extreme social distancing is hard to keep up, says Gabriel Leung, a modeler at the University of Hong Kong. Both cities are seeing an uptick in cases that he thinks stem at least in part from “response fatigue.”  “We were the poster children because we started early. And we went quite heavy,” Leung says. Now, “It’s 2 months already, and people are really getting very tired.” He thinks both cities may be on the brink of a “major sustained local outbreak”.

Long lockdowns to slow a disease can also have catastrophic economic impacts that may themselves affect public health. “It’s a three-way tussle,” Leung says, “between protecting health, protecting the economy, and protecting people’s well-being and emotional health.”

The economic fallout isn’t something epidemic models address, Longini says—but that may have to change. “We should probably hook up with some economic modelers and try to factor that in,” he says.

The Fields Institute for Research in Mathematical Sciences


Mathematical Modelling of COVID-19: CIHR awards $666,667 to the Fields Institute in partnership with AARMS, CRM and PIMS.

The Fields Institute, in partnership with AARMS, CRM, and PIMS, and with the collaboration of PHAC, VIDO-Intervac, and NRC will receive funding for COVID-19-related research.

TORONTO, March 25, 2020: The Fields Institute, in partnership with AARMS, CRM, and PIMS, and with the collaboration of PHAC, VIDO-Intervac, and NRC, will receive $666,667 for COVID-19-related research. Fields Institute Director Kumar Murty is the principal investigator on the grant funded through the Canadian Institutes of Health Research (CIHR) Canadian 2019 Novel Coronavirus (COVID-19) Rapid Research Funding Opportunity.

Funding decisions were announced March 19 as part of Canada’s emergency efforts to rapidly detect, manage, and reduce the transmission of COVID-19. The grant is the result of an emergency Rapid Response Taskforce workshop held at the Fields Institute on February 14 and 15, 2020.

“The additional teams of researchers receiving funding today will help Canada quickly generate the evidence we need to contribute to the global understanding of the COVID-19 illness,” said the Honourable Patty Hajdu, Minister of Health, in a news release issued by CIHR on March 19. “Their essential work will contribute to the development of effective vaccines, diagnostics, treatments, and public health responses.”

The project will bring together Canadian mathematics institutes, national and international co-investigators, collaborators, and team members, to mobilize a network of infectious disease modellers who will assess transmission risk, predict outbreak trajectories, and evaluate the effectiveness of COVID-19 countermeasures. Six of the 14 co-investigators are faculty at Fields Institute Principal Sponsoring Universities*.

The Directors of AARMS, CRM, PIMS, and Fields are committed to using this grant as a starting point for establishing a continuing national network to study the mathematics of public health and disease modelling.

The two-year grant is one of 96 funded in a concerted effort to understand and control COVID-19. It brings together a truly international team of experts from the following organizations:

MEDIA CONTACT

Esther Berzunza

Manager, Development & Communications

The Fields Institute

eberzunz@fields.utoronto.ca



Removing macrophages shows success against ovarian cancer in mice

By removing two kinds of macrophages in mice, researchers showed that ovarian tumours in mice were reduced in size and stopped spreading.


A new study has demonstrated that it is possible to hinder the spread of ovarian cancer and reduce tumour size in mice.

The researchers, led by scientists at Aarhus University, Denmark, removed some specific immune cells, known as macrophages, from the fat that is stored in the abdominal cavity and hanging in front of the intestines, known as omental fat.

“This is where the omental fat becomes a kind of host for cells which would otherwise perish and our research now shows that when tumour cells move into the omental fat, two specific types of immune cell known as macrophages alter character. They develop into the disease’s small supporters,” said lead researcher Dr Anders Etzerodt, assistant professor of cancer immunology at the Department of Biomedicine at Aarhus University. The researchers explain that ovarian cancer most often occurs in the fallopian tubes and that the starting point for the project was familiar knowledge about cancer cells being able to detach and shed into the abdominal cavity. As this occurs very early in the course of the disease, the ‘homeless’ cancer cells need to fasten onto something to survive.


“One of the macrophage types which is already present in the tissue simply begins to help the tumour spread further to the other organs in the abdominal cavity. At the same time, the second type of macrophage, which comes from the bloodstream and is recruited as a reaction to the infiltration of tumour cells into the omental fat, begins to counteract the immune system’s attempt to fight the invasive cancer cells. In this way, they help the tumour to grow larger.”


The researchers experimented with removing the macrophages already found in the tissue, which led them to establish that this inhibited the spread of cancer in the abdominal cavity – though without the tumour in the omental fat becoming smaller. When the researchers simultaneously removed the second type of macrophage, the one recruited from the bloodstream, the result was both less spreading and a shrinking tumour.

“We describe a type of immunotherapy which differs from the immunotherapy that is characterised by supporting the T cells that kill a tumour and which has become an established part of modern immunological treatment,” commented Etzerodt. “What we’re doing is also immunotherapy, but it focuses on another part of the immune system. This project is only the third scientific article to describe how macrophages with different origins affect tumour development and precisely how the macrophages that are found to inhibit the immune system’s ability to hamper the cancer can be removed.”

The researchers identified the new macrophage types using a technique called single-cell sequencing, a method which gives very detailed information about the processes that take place in each individual cell.

According to Etzerodt, their results have obvious potential for improved treatment in the future; the next step is to develop a medicine which can be tested on people. This is particularly interesting, say the researchers, because the group has previously shown that similar macrophages from the bloodstream are also present in models for skin cancer.

“So far, we’ve gained a new and deeper understanding of what is helping and what is hindering the body in the development of ovarian cancer and I’m looking forward to testing this in clinical trials on patients who currently have a really poor prognosis,” Etzerodt said.

The study was published in the Journal of Experimental Medicine.

Scientists Linked Artificial and Biological Neurons in a Network—and Amazingly, It Worked


Scientists have linked up two silicon-based artificial neurons with a biological one across multiple countries into a fully-functional network. Using standard internet protocols, they established a chain of communication whereby an artificial neuron controls a living, biological one, and passes on the info to another artificial one.

Whoa.

We’ve talked plenty about brain-computer interfaces and novel computer chips that resemble the brain. We’ve covered how those “neuromorphic” chips could link up into tremendously powerful computing entities, using engineered communication nodes called artificial synapses.

As Moore’s law is dying, we even said that neuromorphic computing is one path towards the future of extremely powerful, low energy consumption artificial neural network-based computing—in hardware—that could in theory better link up with the brain. Because the chips “speak” the brain’s language, in theory they could become neuroprosthesis hubs far more advanced and “natural” than anything currently possible.

This month, an international team put all of those ingredients together, turning theory into reality.

The three labs, scattered across Padova, Italy, Zurich, Switzerland, and Southampton, England, collaborated to create a fully self-controlled, hybrid artificial-biological neural network that communicated using biological principles, but over the internet.

The three-neuron network, linked through artificial synapses that emulate the real thing, was able to reproduce a classic neuroscience experiment that’s considered the basis of learning and memory in the brain. In other words, artificial neuron and synapse “chips” have progressed to the point where they can actually use a biological neuron intermediary to form a circuit that, at least partially, behaves like the real thing.

That’s not to say cyborg brains are coming soon. The simulation only recreated a small network that supports excitatory transmission in the hippocampus—a critical region that supports memory—and most brain functions require enormous cross-talk between numerous neurons and circuits. Nevertheless, the study is a jaw-dropping demonstration of how far we’ve come in recreating biological neurons and synapses in artificial hardware.

And perhaps one day, the currently “experimental” neuromorphic hardware will be integrated into broken biological neural circuits as bridges to restore movement, memory, personality, and even a sense of self.

The Artificial Brain Boom

One important thing: this study relies heavily on a decade of research into neuromorphic computing, or the implementation of brain functions inside computer chips.

The best-known example is perhaps IBM’s TrueNorth, which leveraged the brain’s computational principles to build a completely different computer than what we have today. Today’s computers run on a von Neumann architecture, in which memory and processing modules are physically separate. In contrast, the brain’s computing and memory are simultaneously achieved at synapses, small “hubs” on individual neurons that talk to adjacent ones.

Because memory and processing occur on the same site, biological neurons don’t have to shuttle data back and forth between processing and storage compartments, massively reducing processing time and energy use. What’s more, a neuron’s history will also influence how it behaves in the future, increasing flexibility and adaptability compared to computers. With the rise of deep learning, which loosely mimics neural processing as the prima donna of AI, the need to reduce power while boosting speed and flexible learning is becoming ever more paramount in the AI community.

Neuromorphic computing was partially born out of this need. Most chips utilize special ingredients that change their resistance (or other physical characteristics) to mimic how a neuron might adapt to stimulation. Some chips emulate a whole neuron, that is, how it responds to a history of stimulation—does it get easier or harder to fire? Others imitate synapses themselves, that is, how easily they will pass on the information to another neuron.
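
As a point of reference for what "emulating a whole neuron" means in software, here is a minimal sketch of a leaky integrate-and-fire neuron, the kind of simple spiking model that neuromorphic circuits commonly approximate. The parameters are generic textbook-style values, not those of any particular chip in the study.

```python
# A leaky integrate-and-fire (LIF) neuron: the membrane "voltage" leaks
# toward rest, integrates input current, and emits a spike (then resets)
# when it crosses a threshold. This is a generic illustrative model, not
# the circuit implemented by any specific neuromorphic chip.

import numpy as np

def lif_spikes(input_current, dt=1e-3, tau=20e-3, v_rest=0.0,
               v_reset=0.0, v_threshold=1.0, resistance=1.0):
    v = v_rest
    spike_times = []
    for t, i_in in enumerate(input_current):
        # dv/dt = (-(v - v_rest) + R * I) / tau
        v += dt * (-(v - v_rest) + resistance * i_in) / tau
        if v >= v_threshold:
            spike_times.append(t * dt)
            v = v_reset
    return spike_times

if __name__ == "__main__":
    # 200 ms of silence followed by 800 ms of constant drive
    current = np.concatenate([np.zeros(200), 1.5 * np.ones(800)])
    print(f"{len(lif_spikes(current))} spikes in 1 s of simulated input")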

Although single neuromorphic chips have proven to be far more efficient and powerful than current computer chips running machine learning algorithms in toy problems, so far few people have tried putting the artificial components together with biological ones in the ultimate test.

That’s what this study did.

A Hybrid Network

Still with me? Let’s talk network.

It’s gonna sound complicated, but remember: learning is the formation of neural networks, and neurons that fire together wire together. To rephrase: when learning, neurons will spontaneously organize into networks so that future instances will re-trigger the entire network. To “wire” together, downstream neurons will become more responsive to their upstream neural partners, so that even a whisper will cause them to activate. In contrast, some types of stimulation will cause the downstream neuron to “chill out” so that only an upstream “shout” will trigger downstream activation.

Both these properties—easier or harder to activate downstream neurons—are essentially how the brain forms connections. The “amping up,” in neuroscience jargon, is long-term potentiation (LTP), whereas the down-tuning is LTD (long-term depression). These two phenomena were first discovered in the rodent hippocampus more than half a century ago, and ever since have been considered the biological basis of how the brain learns and remembers, and implicated in neurological problems such as addiction (seriously, you can’t pass Neuro 101 without learning about LTP and LTD!).
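
A common software caricature of LTP and LTD is a pair-based spike-timing-dependent plasticity (STDP) rule: presynaptic spikes that arrive shortly before a postsynaptic spike strengthen the synapse, and the reverse ordering weakens it. The sketch below uses that generic rule with arbitrary constants; it is not the update rule implemented by the memristive synapses in the study.

```python
# LTP/LTD caricatured as pair-based STDP: pre-before-post potentiates the
# synaptic weight (LTP); post-before-pre depresses it (LTD). Time constants
# and amplitudes are generic illustrative values.

import math

def stdp_weight_change(pre_spike_times, post_spike_times,
                       a_plus=0.01, a_minus=0.012, tau=20e-3):
    dw = 0.0
    for t_pre in pre_spike_times:
        for t_post in post_spike_times:
            dt = t_post - t_pre
            if dt > 0:       # pre before post -> potentiation (LTP)
                dw += a_plus * math.exp(-dt / tau)
            elif dt < 0:     # post before pre -> depression (LTD)
                dw -= a_minus * math.exp(dt / tau)
    return dw

if __name__ == "__main__":
    # pre leads post by 5 ms -> net strengthening
    print("LTP-like change:", stdp_weight_change([0.000], [0.005]))
    # post leads pre by 5 ms -> net weakening
    print("LTD-like change:", stdp_weight_change([0.005], [0.000]))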

So it’s perhaps especially salient that one of the first artificial-brain hybrid networks recapitulated this classic result.

To visualize: the three-neuron network began in Switzerland, with an artificial neuron with the badass name of “silicon spiking neuron.” That neuron is linked to an artificial synapse, a “memristor” located in the UK, which is then linked to a biological rat neuron cultured in Italy. The rat neuron has a “smart” microelectrode, controlled by the artificial synapse, to stimulate it. This is the artificial-to-biological pathway.

Meanwhile, the rat neuron in Italy also has electrodes that listen in on its electrical signaling. This signaling is passed back to another artificial synapse in the UK, which is then used to control a second artificial neuron back in Switzerland. This is the biological-to-artificial return pathway. As a testament to how far we’ve come in digitizing neural signaling, all of the biological neural responses are digitized and sent over the internet to control their far-out artificial partner.

Here’s the crux: to demonstrate a functional neural network, just having the biological neuron passively “pass on” electrical stimulation isn’t enough. It has to show the capacity to learn, that is, to be able to mimic the amping up and down-tuning that are LTP and LTD, respectively.

You’ve probably guessed the results: certain stimulation patterns to the first artificial neuron in Switzerland changed how the artificial synapse in the UK operated. This, in turn, changed the stimulation to the biological neuron, so that it either amped up or toned down depending on the input.

Similarly, the response of the biological neuron altered the second artificial synapse, which then controlled the output of the second artificial neuron. Altogether, the biological and artificial components seamlessly linked up, over thousands of miles, into a functional neural circuit.

Cyborg Mind-Meld

So…I’m still picking my jaw up off the floor.

It’s utterly insane seeing a classic neuroscience learning experiment repeated with an integrated network with artificial components. That said, a three-neuron network is far from the thousands of synapses (if not more) needed to truly re-establish a broken neural circuit in the hippocampus, which DARPA has been aiming to do. And LTP and LTD have come under fire recently as the de facto brain mechanisms for learning, though so far they remain cemented as neuroscience dogma.

However, this is one of the few studies where you see fields coming together. As Richard Feynman famously said, “What I cannot create, I do not understand.” Even though neuromorphic chips were built on a high-level rather than molecular-level understanding of how neurons work, the study shows that artificial versions can still synapse with their biological counterparts. We’re not just on the right path towards understanding the brain, we’re recreating it, in hardware—if just a little.

While the study doesn’t have immediate use cases, practically it does boost both the neuromorphic computing and neuroprosthetic fields.

“We are very excited with this new development,” said study author Dr. Themis Prodromakis at the University of Southampton. “On one side it sets the basis for a novel scenario that was never encountered during natural evolution, where biological and artificial neurons are linked together and communicate across global networks; laying the foundations for the Internet of Neuro-electronics. On the other hand, it brings new prospects to neuroprosthetic technologies, paving the way towards research into replacing dysfunctional parts of the brain with AI chips.”

Image Credit: Gerd Altmann from Pixabay

ABSTRACTIONS BLOG

Why Gravity Is Not Like the Other Forces

We asked four physicists why gravity stands out among the forces of nature. We got four different answers.
[Image: A falling apple.]

Physicists still ponder why, exactly, the apple falls.

Samuel Velasco/Quanta Magazine

Physicists have traced three of the four forces of nature — the electromagnetic force and the strong and weak nuclear forces — to their origins in quantum particles. But the fourth fundamental force, gravity, is different.

Our current framework for understanding gravity, devised a century ago by Albert Einstein, tells us that apples fall from trees and planets orbit stars because they move along curves in the space-time continuum. These curves are gravity. According to Einstein, gravity is a feature of the space-time medium; the other forces of nature play out on that stage.

But near the center of a black hole or in the first moments of the universe, Einstein’s equations break down. Physicists need a truer picture of gravity to accurately describe these extremes. This truer theory must make the same predictions Einstein’s equations make everywhere else.

Physicists think that in this truer theory, gravity must have a quantum form, like the other forces of nature. Researchers have sought the quantum theory of gravity since the 1930s. They’ve found candidate ideas — notably string theory, which says gravity and all other phenomena arise from minuscule vibrating strings — but so far these possibilities remain conjectural and incompletely understood. A working quantum theory of gravity is perhaps the loftiest goal in physics today.

What is it that makes gravity unique? What’s different about the fourth force that prevents researchers from finding its underlying quantum description? We asked four different quantum gravity researchers. We got four different answers.

Gravity Breeds Singularities

Claudia de Rham, a theoretical physicist at Imperial College London, has worked on theories of massive gravity, which posit that the quantized units of gravity are massive particles:

Einstein’s general theory of relativity correctly describes the behavior of gravity over close to 30 orders of magnitude, from submillimeter scales all the way up to cosmological distances. No other force of nature has been described with such precision and over such a variety of scales. With such a level of impeccable agreement with experiments and observations, general relativity could seem to provide the ultimate description of gravity. Yet general relativity is remarkable in that it predicts its very own fall.

General relativity yields the predictions of black holes and the Big Bang at the origin of our universe. Yet the “singularities” in these places, mysterious points where the curvature of space-time seems to become infinite, act as flags that signal the breakdown of general relativity. As one approaches the singularity at the center of a black hole, or the Big Bang singularity, the predictions inferred from general relativity stop providing the correct answers. A more fundamental, underlying description of space and time ought to take over. If we uncover this new layer of physics, we may be able to achieve a new understanding of space and time themselves.

If gravity were any other force of nature, we could hope to probe it more deeply by engineering experiments capable of reaching ever-greater energies and smaller distances. But gravity is no ordinary force. Try to push it into unveiling its secrets past a certain point, and the experimental apparatus itself will collapse into a black hole.

Gravity Leads to Black Holes

Daniel Harlow, a quantum gravity theorist at the Massachusetts Institute of Technology, is known for applying quantum information theory to the study of gravity and black holes:

Black holes are the reason it’s difficult to combine gravity with quantum mechanics. Black holes can only be a consequence of gravity because gravity is the only force that is felt by all kinds of matter. If there were any type of particle that did not feel gravity, we could use that particle to send out a message from the inside of the black hole, so it wouldn’t actually be black.

The fact that all matter feels gravity introduces a constraint on the kinds of experiments that are possible: Whatever apparatus you construct, no matter what it’s made of, it can’t be too heavy, or it will necessarily gravitationally collapse into a black hole. This constraint is not relevant in everyday situations, but it becomes essential if you try to construct an experiment to measure the quantum mechanical properties of gravity.

Our understanding of the other forces of nature is built on the principle of locality, which says that the variables that describe what’s going on at each point in space — such as the strength of the electric field there — can all change independently. Moreover, these variables, which we call “degrees of freedom,” can only directly influence their immediate neighbors. Locality is important to the way we currently describe particles and their interactions because it preserves causal relationships: If the degrees of freedom here in Cambridge, Massachusetts, depended on the degrees of freedom in San Francisco, we may be able to use this dependence to achieve instantaneous communication between the two cities or even to send information backward in time, leading to possible violations of causality.

The hypothesis of locality has been tested very well in ordinary settings, and it may seem natural to assume that it extends to the very short distances that are relevant for quantum gravity (these distances are small because gravity is so much weaker than the other forces). To confirm that locality persists at those distance scales, we need to build an apparatus capable of testing the independence of degrees of freedom separated by such small distances. A simple calculation shows, however, that an apparatus that’s heavy enough to avoid large quantum fluctuations in its position, which would ruin the experiment, will also necessarily be heavy enough to collapse into a black hole! Therefore, experiments confirming locality at this scale are not possible. And quantum gravity therefore has no need to respect locality at such length scales.
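
That "simple calculation" can be sketched as a standard back-of-the-envelope argument (not necessarily Harlow's exact derivation): localizing an apparatus to a length $\Delta x$ without large position fluctuations requires a mass of at least roughly $\hbar/(c\,\Delta x)$, while avoiding gravitational collapse requires $\Delta x$ to exceed the corresponding Schwarzschild radius $\sim G m/c^{2}$. Combining the two,

$$\Delta x \;\gtrsim\; \frac{G}{c^{2}}\cdot\frac{\hbar}{c\,\Delta x} \quad\Longrightarrow\quad \Delta x \;\gtrsim\; \sqrt{\frac{\hbar G}{c^{3}}} \;=\; \ell_{\mathrm{Planck}} \;\approx\; 1.6\times 10^{-35}\ \mathrm{m},$$

so locality simply cannot be probed experimentally below the Planck length.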

Indeed, our understanding of black holes so far suggests that any theory of quantum gravity should have substantially fewer degrees of freedom than we would expect based on experience with the other forces. This idea is codified in the “holographic principle,” which says, roughly speaking, that the number of degrees of freedom in a spatial region is proportional to its surface area instead of its volume.
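
The usual quantitative statement behind this area scaling is the Bekenstein–Hawking entropy of a black hole, which the holographic principle takes as a bound on the information content of any region with boundary area $A$:

$$S_{\mathrm{BH}} \;=\; \frac{k_B\,c^{3}\,A}{4\,G\,\hbar} \;=\; \frac{k_B\,A}{4\,\ell_{\mathrm{Planck}}^{2}},$$

i.e. the number of degrees of freedom grows like the boundary area measured in Planck units, not like the enclosed volume.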

Gravity Creates Something From Nothing

Juan Maldacena, a quantum gravity theorist at the Institute for Advanced Study in Princeton, New Jersey, is best known for discovering a hologram-like relationship between gravity and quantum mechanics:

Particles can display many interesting and surprising phenomena. We can have spontaneous particle creation, entanglement between the states of particles that are far apart, and particles in a superposition of existence in multiple locations.

In quantum gravity, space-time itself behaves in novel ways. Instead of the creation of particles, we have the creation of universes. Entanglement is thought to create connections between distant regions of space-time. We have superpositions of universes with different space-time geometries.

Furthermore, from the perspective of particle physics, the vacuum of space is a complex object. We can picture many entities called fields superimposed on top of one another and extending throughout space. The value of each field is constantly fluctuating at short distances. Out of these fluctuating fields and their interactions, the vacuum state emerges. Particles are disturbances in this vacuum state. We can picture them as small defects in the structure of the vacuum.

When we consider gravity, we find that the expansion of the universe appears to produce more of this vacuum stuff out of nothing. When space-time is created, it just happens to be in the state that corresponds to the vacuum without any defects. How the vacuum appears in precisely the right arrangement is one of the main questions we need to answer to obtain a consistent quantum description of black holes and cosmology. In both of these cases there is a kind of stretching of space-time that results in the creation of more of the vacuum substance.

Gravity Can’t Be Calculated

Sera Cremonini, a theoretical physicist at Lehigh University, works on string theory, quantum gravity and cosmology:

There are many reasons why gravity is special. Let me focus on one aspect, the idea that the quantum version of Einstein’s general relativity is “nonrenormalizable.” This has implications for the behavior of gravity at high energies.

In quantum theories, infinite terms appear when you try to calculate how very energetic particles scatter off each other and interact. In theories that are renormalizable — which include the theories describing all the forces of nature other than gravity — we can remove these infinities in a rigorous way by appropriately adding other quantities that effectively cancel them, so-called counterterms. This renormalization process leads to physically sensible answers that agree with experiments to a very high degree of accuracy.

The problem with a quantum version of general relativity is that the calculations that would describe interactions of very energetic gravitons — the quantized units of gravity — would have infinitely many infinite terms. You would need to add infinitely many counterterms in a never-ending process. Renormalization would fail. Because of this, a quantum version of Einstein’s general relativity is not a good description of gravity at very high energies. It must be missing some of gravity’s key features and ingredients.
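
A standard power-counting way to see why the counterterms never stop (a textbook argument, not Cremonini's specific calculation) is that Newton's constant carries negative mass dimension. In natural units the Einstein–Hilbert action is

$$S \;=\; \frac{1}{16\pi G}\int d^{4}x\,\sqrt{-g}\,R, \qquad G \;\sim\; \frac{1}{M_{\mathrm{Pl}}^{2}},$$

where $M_{\mathrm{Pl}} \sim 10^{19}\ \mathrm{GeV}$ is the Planck mass (up to numerical factors). The effective graviton coupling therefore grows with energy like $E/M_{\mathrm{Pl}}$, graviton scattering amplitudes grow like $E^{2}/M_{\mathrm{Pl}}^{2}$, and each additional loop demands counterterms of ever higher dimension, which is exactly the never-ending process described above.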

However, we can still have a perfectly good approximate description of gravity at lower energies using the standard quantum techniques that work for the other interactions in nature. The crucial point is that this approximate description of gravity will break down at some energy scale — or equivalently, below some length.

Above this energy scale, or below the associated length scale, we expect to find new degrees of freedom and new symmetries. To capture these features accurately we need a new theoretical framework. This is precisely where string theory or some suitable generalization comes in: According to string theory, at very short distances, we would see that gravitons and other particles are extended objects, called strings. Studying this possibility can teach us valuable lessons about the quantum behavior of gravity.
