Very Important Videos, Links, Social Networkings and Websites of the World @ Time – Tempo(s) & Scientists regenerate retinal cells in mice & Why do Two Genetically Identical Mice Look Vastly Different? & A Tale of Two Mice (July 2008) & Mouse Heart Bioreactor for Heart Valve Research – Animation @ Yuriana Aguilar on Her Biomedical Research With Mouse Hearts & White mouse used for biochemistry research @ “At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of valves.[52] Their first transistorised computer, and the first in the world, was operational by 1953, and a second version was completed there in April 1955.” & https://en.wikipedia.org/wiki/Computer

However, the machine did make use of valves to generate its 125 kHz clock waveforms and in the circuitry to read and write on its magnetic drum memory, so it was not the first completely transistorized computer. That distinction goes to the Harwell CADET of 1955,[53] built by the electronics division of the Atomic Energy Research Establishment at Harwell.[53][54]

[Image caption: MOSFET (MOS transistor), showing gate (G), body (B), source (S) and drain (D) terminals. The gate is separated from the body by an insulating layer (pink).]

The metal–oxide–silicon field-effect transistor (MOSFET), also known as the MOS transistor, was invented by Mohamed M. Atalla and Dawon Kahng at Bell Labs in 1959.[55] It was the first truly compact transistor that could be miniaturised and mass-produced for a wide range of uses.[51] With its high scalability,[56] and much lower power consumption and higher density than bipolar junction transistors,[57] the MOSFET made it possible to build high-density integrated circuits.[58][59] In addition to data processing, it also enabled the practical use of MOS transistors as memory cell storage elements, leading to the development of MOS semiconductor memory, which replaced earlier magnetic-core memory in computers.[60] The MOSFET led to the microcomputer revolution,[61] and became the driving force behind the computer revolution.[62][63] The MOSFET is the most widely used transistor in computers,[64][65] and is the fundamental building block of digital electronics.[66]

After Transplant Procedure, Man’s Semen Contains Only Donor’s DNA – @ https://lifeboat.com/blog/2019/12/after-transplant-procedure-mans-semen-contains-only-donors-dna?fbclid=IwAR3oiL91uzGRn0kTmHKKtbFpFWnPWXQXoGN86a5soPIgJctsz4Drkfr7Hfo

Why human health must be at the center of climate action https://www.greenbiz.com/article/why-human-health-must-be-center-climate-action

Phase I dose-escalation study to determine the safety, tolerability, preliminary efficacy and pharmacokinetics of an intratumoral injection of tigilanol tiglate (EBC-46) https://www.sciencedirect.com/science/article/pii/S2352396419307868?fbclid=IwAR3vkrhKoXOtnLH-cZrHYrxJAqqkX9gVVlNiR6UWh28ODqVX2Ed8SRphi2E

Oxygen shaped the evolution of the eye https://phys.org/news/2019-12-oxygen-evolution-eye.html?fbclid=IwAR1D5XXUXAEyyrrexM7A0zJkl7bnnth8O5Aee3YBMohH31w8zaCGZT5_TEc

Intel says this breakthrough will make quantum computing more practical https://lifeboat.com/blog/2019/12/intel-says-this-breakthrough-will-make-quantum-computing-more-practical?fbclid=IwAR0MM2hrTLcmkA1kVk5VJNrTg13a3XHo4i17jkGzDyVy9Ay2h3OWf7G6od0

Musk plans human Mars missions as soon as 2024 https://spacenews.com/musk-plans-human-mars-missions-as-soon-as-2024/?fbclid=IwAR0S3eFdjTNwvVJnbuZuf_s0HG-lHjf7nf9yEKwOtlZ9kVXoXvvXMRHthEw

FDA approves first fish-oil drug for cutting cardiac risks

In patient testing, the drug reduced risks of potentially deadly complications including heart attacks and strokes about 25 percent. https://www.nbcnews.com/health/heart-health/fda-approves-first-fish-oil-drug-cutting-cardiac-risks-n1102026

Astronauts just printed meat in space for the first time — and it could change the way we grow food on Earth https://www.businessinsider.com/meat-grown-in-space-with-3d-printer-2019-10?fbclid=IwAR1z_vtjUsIq6bgsa3AVzTMjtgghb5hGu2UmsU4AU1PFmSOirNxl6eEfjH0

A Harvard geneticist’s goal: to protect humans from viruses, genetic diseases, and aging

George Church’s lab at Harvard Medical School is working to make humans immune to all viruses, eliminate genetic diseases and reverse the aging process. Scott Pelley reports on how close the geneticist’s team is to a breakthrough. https://www.cbsnews.com/news/harvard-geneticist-george-church-goal-to-protect-humans-from-viruses-genetic-diseases-and-aging-60-minutes-2019-12-08/?fbclid=IwAR3IZ1DlFCWJmw-J5H0yuj0PClnXZ-hKpHQUw3hhmQHeB4tR4GxHndLHO-c

NASA has pinpointed an area where astronauts could land on Mars. Ice is so accessible there that they could dig it up with a shovel. https://www.businessinsider.com/nasa-spot-on-mars-with-ice-where-humans-could-land-2019-12?utm_content=bufferef4e2&utm_medium=social&utm_source=facebook.com&utm_campaign=buffer&fbclid=IwAR2AzjOUIMVURAkjMxuNXdRKr5qLjrJMmSHhrATKXNpu5BczzcwSRYjeKPc

NASA ENGINEER SAYS NEW THRUSTER COULD REACH 99% SPEED OF LIGHT https://futurism.com/the-byte/nasa-thruster-99-speed-light?fbclid=IwAR28vKUvf8fPgDf5C4X2z6MhqLHNTFTbgDTIes1T9mPjVwO_kBOG_U7w4fA

Physicists Use Bubbling Quantum Vacuum to Hopscotch Heat Across Empty Space https://www.livescience.com/quantum-vacuum-sends-heat.html?fbclid=IwAR0LAtC6TCQMNR6Sqrr_zcWkO8VICbhWjRpwarCnDLlPVK31jvRwyPOKY3s

Innovative Technologies & Techniques | Update on Diagnosis and Treatment of Abdominal Aortic Aneurysm (AAA) in Women https://www.texasheart.org/innovative-technologies-techniques-update-on-diagnosis-and-treatment-of-abdominal-aortic-aneurysm-aaa-in-women/?fbclid=IwAR1c5OENs68TzJQh9oeIDaCi6SCcUJWKpPh2eKMwlACdhglfTjEVBkpFQpw

Imagining the future: Why society needs science fiction http://www.thestargarden.co.uk/Why-society-needs-science-fiction.html?fbclid=IwAR3EoHT-hf_chW8MwD5qrxZJ4S6bp4kzL5gdQl6o1c3uvx6xVyZrrRcZ07k

100 incredible years of physics and the IOP https://beta.iop.org/iop100?fbclid=IwAR1xU_4FzeBylZdYXWYOkRCwxMLtHs5YCi_WP0KWqnMSSZSyhm-ZFsn2FKQ

Heat Energy Has Leapt Across an Empty Vacuum Thanks to a Weird Quantum Effect https://www.sciencealert.com/heat-energy-has-leapt-across-an-empty-vacuum-thanks-to-a-weird-quantum-effect?fbclid=IwAR37AkG6su_07DyeZmlMYC2wSxgrZDrMO7VmFh2f89mbP_okVFjowf7eVQ8

Study maps abundance of plastic debris across European and Asian rivers https://ioppublishing.org/news/study-maps-abundance-of-plastic-debris-across-european-and-asian-rivers/

Reinforcement Learning https://www.newworldai.com/reinforcement-learning/?fbclid=IwAR3n8PvLeDmwH9Zr0BbYpelxTIAc8xnSsLdGCIZ7oq6iKNBWtkPpzzYOHMs

How Quantum Computers Will Break Your Phone’s Encryption https://medium.com/cantors-paradise/how-quantum-computers-will-break-your-phones-encryption-54880dd4b346

EACR-OECI Joint Conference: Molecular Pathology Approach to Cancer

Lisbon, Portugal: 18–20 May 2020 https://www.eacr.org/conference/molecularpathology2020/index?utm_source=socialmedia&utm_medium=web&utm_campaign=MolPath20&utm_content=

Harvard Digital Collections https://library.harvard.edu/digital-collections?fbclid=IwAR0PmqAQKCjGEWcqnMNdHMu7Bhl1LBDbZbncxvnSfwYsxtSXbHCjeMCRSlo

How a Wild Theory About Nelson Mandela Proves the Existence of Parallel Universes https://bigthink.com/paul-ratner/how-the-mandela-effect-phenomenon-explains-the-existence-of-alternate-realities?xrs=RebelMouse_fb&ts=1548809727&fbclid=IwAR2ZRUxBCOdVsDxB7os6xOIez8R12a_k5u4h0E2zCcsNaB_MT99fzj_soow

Ethics and Negotiation: 5 Principles of Negotiation to Boost Your Bargaining Skills in Business Situations

How to use the principles behind negotiation ethics to create win-win agreements for you and your bargaining counterpart https://www.pon.harvard.edu/daily/negotiation-training-daily/questions-of-ethics-in-negotiation/?fbclid=IwAR1qURbn8HUIoNA5aDHydGw0SQp-WgziF-8xuOJsv0kNTn4KTCO5VNAutgk

The History of Artificial Intelligence [Documentary] https://www.youtube.com/watch?v=CCVRBDv4YI0&fbclid=IwAR1Cx3aVEYq3wgORWJkqYhBec7YT-IQT8T8AM6RDSWEFd9mGaAPVY6X16bY

The many worlds of quantum reality with Sean Carroll https://www.youtube.com/watch?v=gpEvv349Pyk&feature=youtu.be&fbclid=IwAR15SPSQVpyHQlS4poS-D2jqDNZ0YqgeCAHCDGLP7Ilh1AUaHEAMr2RFzKE

DATA SCIENCE AND BIG DATA ANALYTICS: MAKING DATA-DRIVEN DECISIONS

Turn big data into even bigger results with a seven-week online course from MIT. https://learn-xpro.mit.edu/data-science?utm_medium=paidsocial&utm_source=facebook&utm_campaign=dsx-r12-sp20&utm_content=fb-b-lt-dm&fbclid=IwAR1tIWV8SbrWuMKzBwsVZFBpGrstGBALuL7kG4SA-0Yaugl_MdxCkGjJhkU

https://www.2invest.com/en/contactus

Google’s quantum supremacy algorithm has found its first practical use

Read more: https://www.newscientist.com/article/2227490-googles-quantum-supremacy-algorithm-has-found-its-first-practical-use/#ixzz687hUbzqD @ https://www.newscientist.com/article/2227490-googles-quantum-supremacy-algorithm-has-found-its-first-practical-use/?fbclid=IwAR2Z7t1cOS9RGV9E5FAEzt1fi46fPu3UH4YcJejViOFtzxk8fihheEdwXwk

Discover the physics of the sun with the Parker solar probe

The research team provides clues to the processes behind the release of material from the sun. https://www.sciencecover.com/1272-2-discover-the-physics-of-the-sun-with-the-parker-solar-probe/?fbclid=IwAR1D1pqnRU30VhSHtHhUVv8Z8tii7EScA3b_2r4gbPOnP5J-ybk5tDicJqY


ScienceCover

Discover the physics of the sun with the Parker solar probe

The research team provides clues to the processes behind the release of material from the Sun. By Kamesh, December 14, 2019

For almost a year and a half, NASA's Parker Solar Probe has provided gigabytes of data about the Sun and its atmosphere.

The data offer clues to the processes behind the release of material from the Sun: the solar wind, and the rarer solar storms that can disrupt technology and harm astronauts. They also provide new insights into the cosmic dust that creates the Geminid meteor shower.

The solar wind carries the Sun's magnetic field with it, shaping space weather throughout the solar system as it flows outward at speeds of about one million miles per hour. One of Parker Solar Probe's main scientific goals is to identify the mechanisms that drive the solar wind into space at such high speed.

Animation of data from the WISPR instrument on Parker Solar Probe. The Sun is at the left of the animation, and Jupiter is highlighted in red. Credits: Naval Research Laboratory/Johns Hopkins Applied Physics Lab

One clue lies in disturbances in the solar wind, structures that may point to the processes that heat and accelerate it. These pockets of dense material have appeared in data from previous missions spanning several decades.

They can be many times larger than Earth, extending tens of thousands of kilometers through space, which means such a structure could compress Earth's magnetic field globally if it collided with it.

Parker Solar Probe measures the structure of the solar wind near the Sun better than ever before. Both remote imaging and in-situ instruments are used to measure these structures as they travel across the spacecraft.

NASA’s STEREO-A spacecraft, with its unique vantage point away from Earth, observed the Sun’s outer atmosphere as Parker Solar Probe flew through it in November 2018, giving scientists another perspective on structures in this region. Credits: NASA/STEREO/Angelos Vourlidas

From its vantage point roughly 90 degrees from Earth, STEREO-A can see the region of the corona through which Parker is flying, allowing researchers such as Viall to combine measurements in new ways and get a better view of solar wind structures as they flow out from the Sun.

Together with images from Parker Solar Probe, scientists can now better examine magnetic disturbances in the solar wind.

Parker's instruments also shed new light on previously unseen processes in the solar wind, revealing an active system near the Sun.

Parker Solar Probe measured sudden reversals in the Sun’s magnetic field. These events, called “switchbacks,” may provide clues to the processes that heat the Sun’s outer atmosphere to millions of degrees. Credits: NASA/GSFC/CIL/Adriana Manrique Gutierrez

The exact origin of the switchbacks is uncertain, but they may be signatures of a process that heats the Sun's outer atmosphere, the corona, to millions of degrees, hundreds of times hotter than the visible surface below.

The cause of this counterintuitive temperature rise is a long-standing question in solar physics, known as the coronal heating problem, and it is tied to how the solar wind is fed and accelerated.

Along with the solar wind, the Sun also releases discrete clouds of material called coronal mass ejections, or CMEs. CMEs are denser, and sometimes faster, than the solar wind, and they too can drive space weather effects on Earth or cause problems for satellites.

CMEs are notoriously difficult to predict. Some are all but invisible from Earth and from STEREO-A, the two vantage points with instruments that can detect CMEs remotely, because of the angle at which they leave the Sun relative to both spacecraft.

Even when CMEs are detected, it is not always possible to predict which ones will disrupt Earth's magnetic field and cause space weather effects, because the magnetic structure within the cloud of material plays an important role.

Our best means of understanding a CME's magnetic properties relies on accurately determining the solar region from which it originates, which means that a class of eruptions called stealth CMEs poses a unique challenge for space weather forecasting.

Stealth CMEs show up in coronagraph instruments, which image only the Sun's outer atmosphere, but they leave no clear eruption signatures on the solar disc, making it difficult to determine exactly where they came from.

But during Parker Solar Probe's first solar encounter in November 2018, the spacecraft was struck by one of these stealth CMEs.

This measurement not only provides a glimpse of a CME near the Sun, but can also help scientists trace stealth CMEs back to their sources.

Understanding how solar eruptions produce the populations of seed particles that feed energetic particle events can help scientists predict when such events may occur and improve models of how they move through space.

Parker Solar Probe's WISPR instrument was designed to capture detailed images of the faint corona and solar wind. But it has also imaged another structure that is hard to see: a dust trail, following some 10,000 km behind the orbit of the asteroid Phaethon, that produces the Geminid meteor shower.

This dust peppers Earth's atmosphere when our planet crosses Phaethon's orbit every December, burning up to produce the great show we call the Geminids.

Although scientists have long known that Phaethon is the parent body of the Geminids, it had never been possible to see the dust trail itself. It is very faint and sits very close to the Sun in the sky, and despite some attempts it had never been picked up by a telescope before. WISPR, however, was developed to detect faint structures near the Sun, and its first direct look at the dust trail is providing new information about its properties.

Three orbits into its mission, Parker Solar Probe will continue exploring the Sun across its remaining 21 close approaches.

The next orbital change will come with a Venus flyby on December 26, which will bring Parker to within about 18.6 million miles of the Sun's surface at its next solar approach on January 29, 2020. With direct measurements of an environment closer to the Sun than ever sampled before, we can hope to learn more about these phenomena and to uncover new questions.

Parker Solar Probe is part of NASA's Living With a Star program to explore aspects of the Sun-Earth system that directly affect life and society.

The Living With a Star program is managed by Goddard Space Flight Center in Greenbelt, Maryland, for NASA's Science Mission Directorate in Washington. Johns Hopkins APL designed, built, and operates the spacecraft. Source: NASA Goddard.

MIT xPRO

DATA SCIENCE AND BIG DATA ANALYTICS: MAKING DATA-DRIVEN DECISIONS

Turn big data into even bigger results with a seven-week online course from MIT.


MAKE BETTER DATA-DRIVEN BUSINESS DECISIONS

Could you be using data more effectively?

90% of the world's data has been created in just the past few years. Faced with overwhelming amounts of data, organizations are struggling to extract the powerful insights they need to make smarter business decisions. To help uncover the true value of your data, the MIT Institute for Data, Systems, and Society (IDSS) created the online course Data Science and Big Data Analytics: Making Data-Driven Decisions for data science professionals looking to harness data in new and innovative ways.

Over the course of seven weeks, you will take your data analytics skills to the next level as you learn the theory and practice behind recommendation engines, regressions, network and graphical modeling, anomaly detection, hypothesis testing, machine learning, and big data analytics.
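For a taste of one of these topics, here is a minimal hypothesis-testing sketch in Python. It is illustrative only, not course material; the data, the "variant" names, and the effect sizes are invented, and it assumes NumPy and SciPy are available.

    # Illustrative two-sample t-test, in the spirit of the course's
    # "hypothesis testing" topic. All numbers below are made up.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(seed=0)
    variant_a = rng.normal(loc=0.12, scale=0.03, size=500)  # metric under variant A
    variant_b = rng.normal(loc=0.13, scale=0.03, size=500)  # metric under variant B

    # Null hypothesis: both variants have the same mean.
    t_stat, p_value = stats.ttest_ind(variant_a, variant_b)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
    # A small p-value (conventionally < 0.05) suggests the observed
    # difference is unlikely to be due to chance alone.

Whatever the course's actual materials look like, the shape of the workflow is the same: state a null hypothesis, compute a test statistic, and interpret the p-value in the context of the business decision.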

At the end of this course you will receive a digital Professional Certificate and 1.8 Continuing Education Units (CEUs) from MIT.

MIT Faculty explain the impact of big data on business decision making.

AFTER THIS COURSE, YOU WILL BE ABLE TO:

  • Apply data science techniques to your organization's data management challenges and business decision making.
  • Determine the difference between graphical models and network models.
  • Convert datasets to models through predictive analytics.
  • Deploy machine learning algorithms to improve business decision making.
  • Master best practices for experiment design and hypothesis testing.
  • Identify and avoid common pitfalls in big data analytics.

Don’t just discover new strategies, tools, and insights – put them to the test! With a selection of 20 case studies and hands-on projects, this course helps learners apply their newfound knowledge to realistic business challenges.

Although Python is the most frequently used language, R can be used to complete many of the case studies. Both of the required case studies in this course can be completed with either Python or R. View the week-by-week course schedule.

THE MIT XPRO LEARNING EXPERIENCE

WORRIED ABOUT WORK/LIFE BALANCE?

Learn online – when & where you like.

WANT A CONFIRMATION OF SUCCESS?

Earn a Professional Certificate and 1.8  Continuing Education Units (CEUs) from MIT.

NEED A NETWORK?

Connect with an international community of professionals.

LOOKING TO INNOVATE?

Gain insights from leading MIT faculty, industry experts, and business leaders.

READY FOR FEEDBACK?

Benefit from a robust, collaborative learning environment.

INTERESTED IN THE LATEST TRENDS?

Access cutting edge, research-based multimedia content developed by MIT professors & industry experts.

START DATE: February 3, 2020

DURATION: 7 weeks

PRICE: $899

ENROLL NOW

WHAT LEARNERS AND COMPANIES ARE SAYING

MORE THAN 6,000 PROFESSIONALS HAVE COMPLETED THIS COURSE.

DR. JASMINE LATHAM – LEAD DATA SCIENTIST AT ONS DATA SCIENCE CAMPUS

“I am very pleased with the course content, it is exactly the level I am looking for. Each professor/course presenter has packed a lot of information and has explained complex algorithms in good detail. Some with [a] good sense of humor.”

ADNAN RAZA – BUSINESS CONSULTING, MACKENZIE INVESTMENTS

“As a novice student in the field of AI and Machine learning, the module based approach really helped me structure the step-wise approach. In addition the real world examples helped associate concepts with applications. I am now more aware and equipped compared to Day 1.”

SUNIL UPADHYAY – TECH LEAD, IBM

“I had very basic information about data sciences and machine learning when I started this course. This course has definitely helped in getting an overview of different aspects and algorithms for data analysis and processing and to gather insights out of it.”

MURALI THYAGARAJAN – DBA & APPLICATION SUPPORT, NASDAQ

“The course was easy to understand and had depth. All the concepts were clearly laid out and explained. This is the best course I have come across on this topic.”

MIT FACULTY TEACHING THIS COURSE

Kalyan Veeramachaneni – Principal Research Scientist at the Laboratory for Information and Decision Systems at MIT

Caroline Uhler – Associate Professor, Department of Electrical Engineering and Computer Science at MIT and IDSS

Guy Bresler – Associate Professor, Department of Electrical Engineering & Computer Science, Laboratory for Information and Decision Systems and IDSS

Devavrat Shah – Professor, Department of Electrical Engineering & Computer Science; Director, Statistics and Data Science Center

Philippe Rigollet – Associate Professor, Mathematics Department at MIT

Victor Chernozhukov – Professor, Department of Economics and the Statistics and Data Science Center at MIT

Stefanie Jegelka – Associate Professor, Department of Electrical Engineering and Computer Science and member of the Computer Science and AI Lab and IDSS

Ankur Moitra – Associate Professor, Department of Mathematics and member of the Computer Science and AI Lab at MIT

Tamara Broderick – Associate Professor, Department of Electrical Engineering and Computer Science and a member of the Computer Science and AI Lab at MIT

David Gamarnik – Nanyang Technological University Professor, Sloan School of Management

Jonathan Kelner – Associate Professor, Department of Mathematics and a member of the Computer Science and AI Lab at MIT

WHO SHOULD ENROLL

Professionals at any career stage, looking to turn large volumes of data into actionable insights.


Past learners' job roles have included: business intelligence analysts, management consultants, technical managers, business managers, data science managers.


Data science enthusiasts and IT professionals.


Background knowledge of statistical techniques and data calculations or quantitative methods of data research is strongly recommended.


Familiarity with either R or Python is recommended but not required.


JUSTIFY YOUR PROFESSIONAL DEVELOPMENT

Many companies offer professional development benefits to their employees but sometimes starting the conversation is the hardest part of the process.

Use these talking points, stats, and email template to advocate for your professional development through MIT xPRO’s online course, Data Science and Big Data Analytics.

VIEW YOUR GUIDE


READY FOR A SNEAK PEEK?

SAMPLE THE CASE STUDY.

Ever wonder how companies like Netflix, Spotify, and Pandora filter products based on each user's unique preferences?

In this case study, you will learn how Netflix utilizes Recommendation Engines to provide the best possible shows and movies for unique users.
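As a rough illustration of the underlying idea (not the actual case study, and certainly not Netflix's production method), here is a minimal user-based collaborative-filtering sketch in Python; the ratings matrix and show titles are invented, and it assumes NumPy is available.

    # Minimal user-based collaborative filtering, for illustration only.
    # Ratings and titles are made up; real recommenders are far richer.
    import numpy as np

    titles = ["Show A", "Show B", "Show C", "Show D"]
    # Rows = users, columns = shows; 0.0 means "not yet rated".
    ratings = np.array([
        [5.0, 4.0, 0.0, 0.0],
        [4.0, 5.0, 5.0, 1.0],
        [1.0, 0.0, 4.0, 5.0],
    ])

    def cosine(u, v):
        """Cosine similarity between two rating vectors."""
        return (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)

    def recommend(user):
        """Pick the unrated show with the highest similarity-weighted score."""
        sims = np.array([cosine(ratings[user], row) for row in ratings])
        sims[user] = 0.0             # ignore the user's own row
        scores = sims @ ratings      # weight other users' ratings by similarity
        unrated = ratings[user] == 0.0
        return titles[int(np.argmax(np.where(unrated, scores, -np.inf)))]

    print(recommend(0))  # "Show C": user 0 resembles user 1, who rated it 5.0

Scaled up, the same idea drives the filtering described above: estimate how similar users (or items) are, then score unseen items by similarity-weighted ratings.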


PROPEL YOUR CAREER ON YOUR TERMS

Technology is accelerating at an unprecedented pace causing disruption across all levels of business. Tomorrow’s leaders must demonstrate technical expertise as well as leadership acumen in order to maintain a technical edge over the competition while driving innovation in an ever-changing environment.

MIT uniquely understands this challenge and how to solve it with decades of experience developing technical professionals. MIT xPRO’s online learning programs leverage vetted content from world-renowned experts to make learning accessible anytime, anywhere. Designed using cutting-edge research in the neuroscience of learning, MIT xPRO programs are application focused, helping professionals build their skills on the job.

Embrace change. Enhance your skill set. Keep learning. MIT xPRO is with you each step of the way.


Nanotechnology Now

Better studying superconductivity in single-layer graphene: An existing technique is better suited to describing superconductivity in pure, single-layer graphene than current methods

 

Abstract:
Made up of 2D sheets of carbon atoms arranged in honeycomb lattices, graphene has been intensively studied in recent years. As well as the material's diverse structural properties, physicists have paid particular attention to the intriguing dynamics of the charge carriers its many variants can contain. The mathematical techniques used to study these physical processes have proved useful so far, but they have had limited success in explaining graphene's 'critical temperature' of superconductivity, below which its electrical resistance drops to zero. In a new study published in EPJ B, Jacques Tempere and colleagues at the University of Antwerp in Belgium demonstrate that an existing technique is better suited for probing superconductivity in pure, single-layer graphene than previously thought.


Heidelberg, Germany | Posted on December 13th, 2019

The team's insights could allow physicists to understand more about the widely varied properties of graphene, potentially aiding the development of new technologies. The approach used in the study is typically applied to calculate critical temperatures in conventional superconductors. In this case, however, it was more accurate than current techniques in explaining how critical temperatures are suppressed at the lower charge-carrier densities seen in pure, single-layer graphene. In addition, it proved more effective in modelling the conditions that give rise to interacting pairs of electrons known as 'Cooper pairs', which strongly influence the electrical properties of the material.

Tempere’s team made their calculations using the ‘dielectric function method’ (DFM), which accounts for the transfer of heat and mass within materials when calculating critical temperatures. Having demonstrated the advantages of the technique, they now suggest that it could prove useful for future studies aiming to boost and probe for superconductivity in single and bilayer graphene. As graphene research continues to be one of the most diverse, fast-paced fields in materials physics, the use of DFM could better equip researchers to utilise it for ever more advanced technological applications.
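For orientation only, the flavor of a critical-temperature calculation can be seen in the textbook BCS weak-coupling estimate below; this is a standard result, not the paper's DFM formula. Here \omega_D is the Debye frequency, N(0) the density of states at the Fermi level, and V the effective pairing interaction:

    % Textbook BCS weak-coupling estimate of the critical temperature.
    % Shown for orientation; as its name suggests, the DFM instead works
    % through the material's dielectric function rather than this formula.
    k_B T_c \approx 1.13\,\hbar\omega_D\,\exp\!\left(-\frac{1}{N(0)\,V}\right)

Loosely, a lower carrier density means a smaller N(0), and the exponential then suppresses T_c sharply, consistent with the suppressed critical temperatures reported for pure, low-density single-layer graphene.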

####


Contacts:
Sabine Lehr
sabine.lehr@springernature.com

@SpringerNature




Ethics and Negotiation: 5 Principles of Negotiation to Boost Your Bargaining Skills in Business Situations

How to use the principles behind negotiation ethics to create win-win agreements for you and your bargaining counterpart

BY PON STAFF — ON DECEMBER 12TH, 2019 / NEGOTIATION TRAINING


Knowing the norms of ethics and negotiation can be useful whether you're negotiating for yourself or on behalf of someone else. Each ethical case you come up against will have its own twists and nuances, but there are a few principles that negotiators should keep in mind while at the bargaining table.

By asking yourself the following questions, you can illuminate the boundaries between right and wrong at the negotiation table and in the process discover your own ethical standards:


Build powerful negotiation skills and become a better dealmaker and leader. Download our FREE special report, Negotiation Skills: Negotiation Strategies and Negotiation Techniques to Help You Become a Better Negotiator, from the Program on Negotiation at Harvard Law School.


Principle 1. Reciprocity:

Would I want others to treat me or someone close to me this way?

Principle 2. Publicity:

Would I be comfortable if my actions were fully and fairly described in the newspaper?

Principle 3. Trusted friend:

Would I be comfortable telling my best friend, spouse, or children what I am doing?

Principle 4. Universality:

Would I advise anyone else in my situation to act this way?

Principle 5. Legacy:

Does this action reflect how I want to be known and remembered?

Doing the right thing sometimes means that we must accept a known cost. But in the long run, doing the wrong thing may be even more costly.

See Also: What If We Have the Same Social Motive at the Bargaining Table: When two people share the same motivation, they may commit the same mistakes and reinforce each other's failures. In this article, we evaluate a labor negotiation in which the chief management negotiator withholds information about revenue projections, while the labor leader holds back details about workforce sentiment. With impasse the result, it helps to be aware of when you're negotiating with a fellow individualist or a fellow cooperator; your goal should be to overcome the inherent flaws of your shared orientation (to identify your negotiating style, please read "Identifying Your Negotiation Style").

See Also: Trust in Negotiations – Trust takes time to develop, but negotiators rarely have time to build strong relationships with their counterparts, so a cautious approach is often taken instead to protect against a bad deal. In this article, the argument for taking risks during a negotiation with a counterpart you do not know is explored, and the benefits and pitfalls of this risk-taking approach are delineated.

See Also: Beware Your Counterpart’s Biases – After a failed negotiation, it’s tempting to construct a story about how the other side’s irrationality led to impasse. Unfortunately, such stories will not resurrect the deal. In the past we have encouraged you to ‘debias’ your own behavior by identifying the assumptions that may be clouding your judgment. We have introduced you to a number of judgment biases – common, systematic errors in thinking that are likely to affect your decisions and harm your outcomes in negotiation.

See Also: Strategies for Negotiating More Rationally – In past articles, we have highlighted a variety of psychological biases that affect negotiators, many of which spring from a reliance on intuition. Of course, negotiators are not always affected by bias; we often think systematically and clearly at the bargaining table. Most negotiators believe they are capable of distinguishing situations in which they can safely rely on intuition from those that require more careful thought, but they are often wrong. In fact, most of us trust our intuition more than the evidence suggests we should.

Which negotiation principle is most important to you? Let us know in the comments.




Adapted from “Ethics and Negotiation” by Michael Wheeler for the March 2004 issue of the Negotiation newsletter.


3 Responses to “Ethics and Negotiation: 5 Principles of Negotiation to Boost Your Bargaining Skills in Business Situations”

  • TESFAYE (November 21, 2013): Thank you for encouraging me to be a better trainer on negotiation.
  • MICHAEL W. (September 13, 2016): A number of interesting ideas here, Michael. You write: “After a failed negotiation, it’s tempting to construct a story about how the other side’s irrationality led to impasse.” I think the obverse is also important. When you succeed, it is equally tempting to construct a story about how rational you were.
  • CLAUDE C. (March 1, 2019): Thank you very much for your daily blog. It is always interesting. On ethics and negotiation, I try to keep in mind that the world is really small and that at any time I could meet my opponent in negotiation again. So I also keep in mind that I should be able to look him in the eyes, calmly, sincerely, frankly, even if at times we disagreed strongly. I should never have to change my course to avoid someone because I am not proud of the way I negotiated.



How a Wild Theory About Nelson Mandela Proves the Existence of Parallel Universes

How Nelson Mandela, quantum mechanics, and the Internet combined to provide evidence of parallel universes. 

PAUL RATNER | 27 December, 2017 | Credit: Gareth Davies/Getty Images and Pixabay.

What does Nelson Mandela, the defiant revolutionary who led the people of South Africa out from under apartheid, have to do with alternate realities? The answer is a conspiracy of conspiracies which has, of course, struck a strong chord on the Internet.

On December 5th, 2013, when President Mandela died, a large number of people around the world found themselves thinking they were sure he died much earlier, while in prison in the 1980s. These people found each other online and the Mandela Effect was born. 

What if there are certain events in our collective memories that some people remember one way and others remember completely differently? The Mandela Effect theory says that both groups are actually remembering correctly. The difference is that one group lived in one timeline or reality and the other group experienced a different timeline in their past.


Nelson Mandela leaves the InterContinental Hotel after a photoshoot with celebrity photographer Terry O’Neil on June 26, 2008 in London, England. (Photo by Chris Jackson/Getty Images)

Fiona Broome, author and self-described “paranormal researcher” who coined the term “the Mandela Effect,” described her memories of his death this way:

“See, I thought Nelson Mandela died in prison,” wrote Broome. “I thought I remembered it clearly, complete with news clips of his funeral, the mourning in South Africa, some rioting in cities, and the heartfelt speech by his widow.”

She didn’t necessarily think much of this at the time, but within a few years she met people who shared the same exact memories. She soon realized that “perhaps thousands” of people had similar “false” memories. They have supported each other online and identified many more collective mis-rememberings.

For her own take on what is happening, Broome invokes quantum mechanics, seeing the collective false memories from a “multiverse” perspective. She and others may be having shared memories from parallel realities.

In 2012, another blogger caused a Mandela Effect splash over the spelling of the title of the children’s book series “The Berenstain Bears”. She remembered it vividly from her childhood as “The Bernstein Bears”, complete with how the letters on the cover looked. It turned out that many people had this same memory as well.

Certainly, as one looks at the kind of memories people seem to misremember, many of them revolve around cultural memes. Another popular memory involves many folks remembering the logo for the cartoon series “Looney Tunes” being spelled “Looney Toons”.

Celebrity deaths are also a popular shared mis-remembrance. People recall vividly the death of the legendary evangelist Billy Graham. He is, as of the writing of this article, very much still alive, having recently celebrated his 99th birthday.

Another popular memory involves a film by the comic Sinbad. Whole online communities sprang up sharing details of a film he supposedly made in the 1990s called “Shazaam!” People even remember how the poster looked. The only issue with that: no such movie was ever made.

A faked cover for the film that spread online.

In 2017, the Mandela Effect was invoked by people who thought the CERN supercollider created a rip in reality and we are now living in one where Donald Trump is President. How much you believe that may depend on your politics.

Of course, it may also feel like a stretch that these internet phenomena are evidence of alternate timelines. What does science have to say about collective false memories? 

Psychologists describe the disconnect between our memories and realities as a confabulation.  The term describes a disturbance of memory, which can result in the production of fabricated or misinterpreted memories, even despite contradictory evidence. It may not even be intentionally happening and can be related to brain damage.

Another explanation for the Mandela Effect, as proposed by neuroscientist Caitlin Aamodt, may be suggestibility: our tendency to believe what others suggest to be true. Especially in the petri dish of the internet, it’s not surprising that supposed instances of the Mandela Effect spread like memes. I certainly wouldn’t be the first to point out that the truth of an event or fact is often not relevant to its dissemination online.

Dr. John Paul Garrison, a clinical and forensic psychologist, described this effect in an email interview with Forbes:

“I suspect that some memories are spontaneously created when we read certain Mandela Effect news,” wrote Garrison. “However, once that new memory is in there, it might seem like it has been there forever.” 

For more on the Mandela Effect, check out this video.

Harvard Digital Collections
Harvard Digital Collections provides free, public access to over 6 million objects digitized from our collections – from ancient art to modern manuscripts and audiovisual materials.


ITEM HIGHLIGHTS

  • Septentrionalium regionum descrip, 1572, Harvard Map Collection
  • Receuil des principaux, 1568, Houghton Library
  • Woman arranging flowers…, 1900–1940, Fine Arts Library
  • Case 25s Harvard Project…, 1950–1953, Fung Library
  • Verification of the Stigmata…, 1288-1279, Biblioteca Berenson
  • The French Chef…, 1963, Schlesinger Library on the History of Women in America


RELATED SERVICES & TOOLS

  • HOLLIS for Archival Discovery: a catalog for exploring collection guides, finding aids and inventories to locate unique materials in Harvard’s special collections and archives.
  • DASH: a central catalog and open-access repository of research by members of the Harvard community.
  • Reproductions: request reproductions of library materials for research or publication.


COLLECTION HIGHLIGHTS

  • Immigration to the United States, 1789–1930
  • Islamic Heritage Project
  • Women Working, 1800–1930



Welcome to the European Association for Cancer Research
Europe’s membership association for cancer researchers

EACR Conference Series

EACR-OECI Joint Conference: Molecular Pathology Approach to Cancer

Lisbon, Portugal : 18 – 20 May 2020


Introduction

Molecular pathology is revolutionising clinical practice in oncology and pathology, paving the way for precision medicine, and has evolved into a growing research field. A shortage of knowledgeable molecular pathologists is a significant bottleneck for advancing cancer research and patient care.

This meeting will provide participants with a broad view of the scope, methodologies, future directions and challenges, in addition to practical approaches, for molecular pathology in research and in clinical settings. The potential uses of deep learning in pathology will be discussed. The meeting will help participants to establish a network of interactions and to build bridges to foster cross-disciplinary studies. We are preparing for the future and for the unknown discoveries still to come.

Target audience

This conference will be of interest to a diverse audience including pathologists, molecular pathologists and pathology residents, researchers in the field of molecular diagnostics, and precision oncologists. The conference audience will thus mirror the real-world multidisciplinary teams that conduct interdisciplinary research and care for patients with cancer.

Topics to be covered

  • Current views on molecular pathology
  • Molecular pathology of colorectal cancer
  • Cancer models for biomarker discovery, from animal models to patient derived organoids
  • New technologies in diagnostic onco-pathology

Scientific Programme Committee: 

  • Leonor David
  • Ragnhild A. Lothe
  • Eli Pikarsky
  • Luigi M. Terracciano
  • Giorgio Stanta

Follow us on Twitter for updates @EACRnews and engage with the conference using #MolPath20


Accreditation by the European Accreditation Council for Continuing Medical Education (EACCME®) is being applied for.



Key dates

Bursary application deadline:
13 March 2020

Abstract submission deadline:
13 March 2020

Registration deadline:
17 April 2020

100% of participants from our 2018 Molecular Pathology meeting would recommend it to others

“Great course and wonderful speakers, interesting and well organized!”

“It provides a comprehensive overview of the most important molecular pathology findings per tumor type.”

Feedback from 2018 participants



How Quantum Computers Will Break Your Phone’s Encryption

Mark Dodds · Jun 30 · 11 min read

These days all your devices are encrypted. Apple boasts about its secure systems, most companies have encrypted servers, and even your own laptop probably has some level of encryption going on. However, all of these systems are subject to flaws. Of course, this all depends on the type of encryption you’re using. While many agree that the standard is and should be AES, there are a lot of other encryption protocols out there, some of which are still in use today. One of the other main algorithms still circulating the globe is the RSA algorithm. In order to understand why, we have to go back to before its time, to the beginnings of modern cryptography.

In the beginning, there were two…

Martin Hellman, https://ee.stanford.edu/~hellman/

Known mostly within American cryptology circles, Martin Hellman was the more academic of the two. After receiving a Ph.D. from Stanford University in 1969, he joined the teaching staff of the electrical engineering department, where he still teaches today.

The second person was Bailey Whitfield Diffie, who could be considered more of your stereotypical revolutionary. After attaining a Bachelor of Science from M.I.T. in 1965, Diffie worked as a researcher in Boston and at Stanford. After 8 years, and a growing interest in cryptography and computer security, he decided to pursue his research full time with the help of his future wife, Mary Fischer.

Bailey Diffie, https://alchetron.com/Whitfield-Diffie

In the summer of 1974, Diffie and Fischer met with a friend at the Thomas J. Watson Research Centre in New York, which housed one of the only non-governmental cryptographic research groups in the United States at the time. While the director couldn’t say much due to secrecy orders, he advised Diffie to meet with Hellman, who was teaching at Stanford while also building a cryptographic research team. A planned half-hour meeting between the two turned into many hours of shared ideas and information. To continue their blossoming partnership, Hellman hired Diffie under the university as a grant-funded part-time research programmer for the 1975 spring term. Over the next year, the pair attended multiple conferences on cryptology and even criticized the NBS’s proposed Data Encryption Standard for potential security risks (including a key length of only 56 bits). Then in 1976, the two first proposed the idea of an asymmetric public-private key cryptosystem. Their system derived a shared secret key from exponentiation of a number modulo a prime, as sketched below.
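Here is a minimal sketch of that key exchange in Python; the tiny numbers are illustrative assumptions, as real systems use primes thousands of bits long.

p = 23                    # public prime modulus (toy size)
g = 5                     # public base
a, b = 6, 15              # Alice's and Bob's private exponents
A = pow(g, a, p)          # Alice publishes g^a mod p
B = pow(g, b, p)          # Bob publishes g^b mod p
# Each side combines its own secret with the other's public value,
# so both arrive at the same shared secret g^(ab) mod p (here, 2):
assert pow(B, a, p) == pow(A, b, p)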

Then there were three more…

Despite leaving the problem of realizing a one-way function open, Diffie and Hellman laid the groundwork which would allow others to go down in computer science history.

Two researchers at M.I.T. first caught wind of the proposal of a public-private key system and set to the task of discovering the missing ingredient. Ron Rivest and Adi Shamir, along with fellow cryptographer Leonard Adleman, made several attempts over the course of a year to create the proposed “one-way function” required to implement the algorithm. Rivest and Shamir, the computer scientists of the group, proposed many potential functions, while Adleman, the mathematician, was responsible for finding their weaknesses. They tried many approaches, including “knapsack-based” functions and “permutation polynomials”. For a time, they even thought that what was needed would be impossible to obtain due to the conflicting constraints.

L2R: Ron Rivest, Adi Shamir, Len Adleman (2003). Image from https://www.usc.edu

Then in April 1977, the trio was spending the night at a student’s house when Rivest, unable to sleep, lay on the couch with a math textbook and again began pondering their one-way function. After an apparent eureka moment, he spent the rest of the night formalizing his idea and supposedly had much of the paper ready by daybreak. When the trio published the paper describing the discovery in August of 1977, under the name RSA for the initials of their last names, they tried to patent it around the globe. However, since the paper had already been published at the time of application, the United States was the only country to grant the patent, for a “Cryptographic communications system and method”, in 1983. Seventeen years later, when the patent expired, the algorithm was officially released to the public domain by RSA Security on September 6th, 2000.


The RSA Algorithm

RSA (Rivest–Shamir–Adleman) is an algorithm used by modern computers to encrypt and decrypt messages. It is an asymmetric cryptographic algorithm, meaning that two different keys are used. It also goes by the name ‘public key cryptography’ because one of the keys can be given to anyone without compromising the system. The other key is kept private and is used to decode the information. The algorithm’s security rests on the fact that finding the factors of a large composite number is difficult; when those factors are large primes, recovering them is the problem of prime factorization. More specifically, the process for creating your public and private keys goes like this:

1. Choose 2 different large prime numbers p and q (candidates can be verified as prime by primality tests).
2. Calculate n = pq. n is used as the modulus for the public and private keys; its length, usually expressed in bits, is the key length.
3. Calculate ɸ(n) = (p - 1)(q - 1), where ɸ(n) is Euler’s totient function; since n = pq with p and q prime, the totient takes this form.
4. Choose an integer e such that 1 < e < ɸ(n) and e is coprime to ɸ(n). (For two numbers to be coprime, they must share no factor other than 1.)
5. Compute d to satisfy the congruence relation de ≡ 1 (mod ɸ(n)). This relation can be solved with the Extended Euclidean Algorithm.
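To make the five steps concrete, here is a toy key-generation sketch in Python (the function and its inputs are my illustration, not production code; pow(e, -1, phi) needs Python 3.8+):

from math import gcd

def generate_keys(p, q, e):
    # p and q are assumed prime (step 1)
    n = p * q                                  # step 2: the modulus
    phi = (p - 1) * (q - 1)                    # step 3: Euler's totient of n
    assert 1 < e < phi and gcd(e, phi) == 1    # step 4: e coprime to phi(n)
    d = pow(e, -1, phi)                        # step 5: d with de ≡ 1 (mod phi(n))
    return (n, e), (n, d)                      # public key, private key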

After completing these steps, your system would send out your public key to the world, which would look something like (n=?, e=?). Using this key, systems wanting to send you an encrypted message would use that information like this:

To send a message M, you would change M into a number m < n using a padding scheme. Then you would compute the ciphertext (encrypted text) c such that c = mᵉ (mod n), i.e. the remainder of mᵉ/n, and send it to the recipient. This can be done efficiently on computers using exponentiation by squaring. Note that in these steps, n and e are the values of the public key shared by the intended recipient.

Now that your message has been encrypted, it can be sent freely around the web, since only the jumbled-up ciphertext is actually being transmitted. This is where the one-way function Hellman and Diffie described comes into play. Using the relationship m = cᵈ (mod n), the recipient’s computer can calculate m and recover the original message (implementations often speed this step up with the Chinese Remainder Theorem). This is only possible with the exponent d, the private key that only the recipient knows. Here is an example of the RSA algorithm in action:

1. Choose two random prime numbers: p = 61, q = 53.
2. Compute n = pq = 61 × 53 = 3233.
3. Compute ɸ(n) = (61 - 1)(53 - 1) = 3120.
4. Choose e > 1 coprime to 3120: e = 17 (chosen arbitrarily).
5. Choose d to satisfy de ≡ 1 (mod ɸ(n)): d = 2753 (found with the Extended Euclidean Algorithm).

The public key is (n = 3233, e = 17). The private key is (n = 3233, d = 2753).
To encrypt m = 123, we calculate c = 123¹⁷ mod 3233 = 855.
To decrypt c = 855, we calculate m = 855²⁷⁵³ mod 3233 = 123. Note that while I skipped showing the calculations for brevity, both can be computed efficiently using the square-and-multiply algorithm for modular exponentiation.
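Python’s built-in three-argument pow() performs exactly this square-and-multiply modular exponentiation, so the worked example can be checked directly:

n, e, d = 3233, 17, 2753
c = pow(123, e, n)            # encrypt: 123^17 mod 3233
assert c == 855
assert pow(c, d, n) == 123    # decrypt: 855^2753 mod 3233 recovers the message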

For those of you who are more mathematically inclined, a full proof is not that hard to build. The namesakes of the RSA algorithm provided a very nice proof along with their proposal of the algorithm, which shows that (mᵉ)ᵈ ≡ m (mod pq). The original proof used Fermat’s little theorem, and while that approach is still valid, many proofs rely on Euler’s theorem instead. There is a proof of the correctness of the RSA algorithm here, which uses a combination of modular arithmetic and Fermat’s little theorem to prove the algorithm is always correct, similar to how it was done in the original paper.


What does it all mean?

Despite being conceived almost half a century ago, the RSA algorithm is still used in a lot of modern technology. First and foremost, it is used in a lot of hybrid encryption schemes. Files often get very large, and it would be impractical to encrypt an entire file using this kind of system. Instead, the file is encrypted symmetrically (similar to password protection), the symmetric key is encrypted using the RSA algorithm, and both are sent to the recipient. This allows much larger files to be securely transmitted over the internet with minimal encryption and decryption overhead.
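As a rough sketch of that hybrid pattern, using the third-party Python cryptography package (the package choice and all names here are my assumption, not something the article specifies):

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
recipient = rsa.generate_private_key(public_exponent=65537, key_size=2048)

sym_key = Fernet.generate_key()                               # fresh symmetric key
ciphertext = Fernet(sym_key).encrypt(b"...a large file...")   # fast bulk encryption
wrapped = recipient.public_key().encrypt(sym_key, oaep)       # RSA wraps only the key

# The recipient unwraps the symmetric key, then decrypts the bulk data:
plaintext = Fernet(recipient.decrypt(wrapped, oaep)).decrypt(ciphertext)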

Another crucial place the algorithm shows up is in digital signatures (like those on an SSL certificate). Using the private key, a computer (or person) can sign a message (or file), producing a signature. Once the file is sent to the intended recipient, they can use the public key to authenticate the file and verify that it has not been altered or damaged. Since the public key is used to authenticate the file (or message), the verification can in principle be done by anyone, as the public key is not sensitive information. This is the basis of how digital authentication works, specifically SSL. An authorized provider signs a certificate with their private key. This certificate is provided to the web client when a website loads, along with the corresponding public key. The client then verifies the certificate using the public key, which authenticates the certificate and the identity of the provider, and allows the server that sent the certificate to open a secure connection to the client.
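In the same package, signing and verifying might look like the sketch below (again my illustration, not the article’s; verify() raises InvalidSignature if the message was tampered with):

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

signer = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"certificate contents"
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

signature = signer.sign(message, pss, hashes.SHA256())                # sign with the private key
signer.public_key().verify(signature, message, pss, hashes.SHA256())  # anyone can verify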


Cracking the code

The main problems that come with an encryption algorithm are reliability and robustness. An algorithm can be the most beautifully complicated thing in the world, but if it can be broken in a short enough timeframe, it is effectively useless. The main technique employed to break the RSA algorithm is brute force. Simply stated, it involves trying all the possible combinations of a private key until you find the right value. While this is trivial for smaller values, when you think about how large prime numbers can get, it becomes a mammoth task. In order to break the RSA algorithm, you would need to find the prime factorization of n, which is provided in the public key. This has become known as the RSA Factoring Challenge. The challenge was put forward almost 30 years ago and has been increasing in difficulty each time a level is completed. To date, the largest key ever to be correctly factored into prime numbers was a 768-bit key, corresponding to a 232-digit semiprime, by a team of researchers led by Thorsten Kleinjung. They used the equivalent of almost 2000 years of computing on a single-core 2.2 GHz AMD Opteron. Nowadays, keys in modern systems are at a minimum 1024 bits, with most going up to 2048 bits, so your texts are probably safe. But that could change very soon.
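The asymmetry is easy to feel with the toy key from the worked example above: a naive factoring attack (my sketch, for illustration) recovers the private exponent instantly at 12 bits, while the same loop would be hopeless against a 2048-bit n.

def crack_toy_rsa(n, e):
    # Brute-force the smallest prime factor of n by trial division.
    p = next(k for k in range(2, n) if n % k == 0)
    q = n // p
    return pow(e, -1, (p - 1) * (q - 1))   # rebuild the private exponent d

assert crack_toy_rsa(3233, 17) == 2753     # the worked example falls instantly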

Quantum computing

So what does this have to do with quantum computers? Well, the rise of quantum computing has been worrying to some people, and rightly so. The handful of quantum computers that exist today are not very powerful (compared to what they could be), mainly because of the practicalities of building and running them. Even so, that has not stopped mathematicians from coming up with algorithms that could theoretically run on powerful enough (and stable enough) quantum computers to break modern encryption schemes.

One algorithm in particular poses a very real threat to the RSA algorithm. Shor’s algorithm, named after American mathematician Peter Shor, effectively cracks the RSA scheme. Informally, it solves the following problem: given an integer N, find its prime factors. There are algorithms that run on today’s computers capable of solving this problem, but the fastest one (the general number field sieve) only works in sub-exponential time. Shor’s algorithm theoretically works in polynomial time, which is a massive improvement. In layman’s terms, the fastest classical algorithm’s runtime grows roughly exponentially, so a larger number to factor results in an exponentially larger runtime. Under Shor’s algorithm, the runtime grows according to a polynomial function, which increases far more slowly than an exponential one. One estimate for the theoretical runtime of the algorithm is 72(log N)³, which, for large N, is astronomically smaller than even a very slowly growing exponential such as 1.01ᴺ.

Figure: 72(log N)³ [red] vs. 1.01ᴺ [blue]
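A few lines of Python make the gap visible (I take the log in the estimate to be natural, since the text doesn’t specify a base, and compare in log10 space so the exponential doesn’t overflow a float):

import math

for N in (10**3, 10**4, 10**5):
    poly = 72 * math.log(N) ** 3       # Shor's estimated cost
    digits = N * math.log10(1.01)      # 1.01^N has about this many digits
    print(f"N={N}: 72(ln N)^3 ~ {poly:.0f}, 1.01^N ~ 10^{digits:.0f}")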

The natural question then becomes: are there any algorithms that are not susceptible to these types of attacks? Well, in theory, yes. One method exists, aptly called the One-Time Pad (OTP), though it is not very practical. The OTP is information-theoretically secure, which means that an adversary’s computational abilities are irrelevant when it comes to recovering the message. However, the OTP uses a pre-shared key to encrypt and decrypt the information. So the information being sent is secure, but the problem is how to efficiently share the key between the sender and the recipient. Using OTPs would require meeting the other side directly, or alternatively using a trusted courier, but if such a channel were available you could just send the message through it, making the encryption fairly irrelevant.


Epilogue

So, are we destined to live our lives completely in the public eye? Will Big Brother come to pass? Probably not. There are multiple alternatives that, while not impervious to these types of attacks, are certainly a lot stronger (AES is one such example). There is also the fact that quantum computers are quite a way off being able to reliably implement Shor’s algorithm. Not only are there limits to how powerful we can currently make quantum computers, but there are also multiple quantum phenomena that impact calculations and make the results unreliable (like quantum noise and other quantum-decoherence phenomena). Another thing to consider is that with the breaking of old algorithms comes the creation of new ones. There are still a lot of unknowns about quantum computing, and it would not be amiss to think that new ideas could be discovered that would make things like the RSA algorithm obsolete.

For now, at least, all your information is safe. You can go ahead and send a text message, knowing that only you and the recipient will be able to read it. All thanks to Diffie and Hellman, who started the journey, and Rivest, Shamir, and Adleman, who were able to see it through.


New World : Artificial Intelligence

Reinforcement Learning

Artificial Intelligence News, December 11, 2019

This year, we have seen all the hype around deep learning in AI. With recent innovations, deep learning has demonstrated its usefulness in tasks such as image recognition, voice recognition, and price forecasting across many industries.

It’s easy to overestimate deep learning’s capabilities and pretend it’s the magic bullet that will allow AI to obtain General Intelligence. In truth, we are still far away from that. However, deep learning has a relatively unknown partner: Reinforcement Learning. As AI researchers venture into the areas of Meta-Learning, attempting to give AI learning capabilities, in conjunction with deep learning, reinforcement learning will play a crucial role.

What is Reinforcement Learning?

Imagine a child who is learning by interacting with their environment. Each touch will generate a sensation that can result in a reward. For instance, the pleasant smell of a flower will entice the child to want to smell the flower again; the pain from a prick of the flower’s stem will alert the child, who will refrain from touching the stem again.

In each case, as the child interacts with the environment, the environment reciprocates and teaches the child by rewarding the child with different sensations.

The child is learning by trial and error.

This is reinforcement learning. In reinforcement learning, an agent starts in a neutral state. Then, as actions are taken, the environment helps the agent transition from the neutral state to other states. In these other states, there might be rewards for the agent.

The goal of the agent is to gather as many rewards as possible.

You can visualize yourself as an agent walking a reinforcement learning path, starting at the beginning of a maze and heading toward the exit. With each step that you take, you have a chance of collecting rewards that you can tally up. Depending on the type and quantity of the rewards in your pouch, you can make decisions that direct you toward the exit. Eventually, over many tries, an optimal path through the maze can be found, as the sketch below illustrates.
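The maze walk maps directly onto tabular Q-learning. Here is a minimal sketch in Python; the corridor layout, rewards, and hyperparameters are my illustrative assumptions:

import random

n_states = 5                             # states 0..4 form a corridor; 4 is the exit
actions = (-1, +1)                       # step left or right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.1    # learning rate, discount, exploration

def choose_action(s):
    # Explore occasionally, and whenever we have no preference yet.
    if random.random() < epsilon or Q[(s, -1)] == Q[(s, +1)]:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(s, a)])

for episode in range(200):
    s = 0                                # each try starts at the entrance
    while s != n_states - 1:
        a = choose_action(s)
        s_next = min(max(s + a, 0), n_states - 1)
        reward = 1.0 if s_next == n_states - 1 else 0.0   # the exit pays a reward
        best_next = max(Q[(s_next, b)] for b in actions)
        # Nudge Q(s, a) toward the observed reward plus discounted future value.
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s_next

# The learned greedy policy heads for the exit (+1) from every state.
print({s: max(actions, key=lambda a: Q[(s, a)]) for s in range(n_states - 1)})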

…………………………………………….

Jun Wu

Read more on Forbes

physics world
Study maps abundance of plastic debris across European and Asian rivers

11 Dec 2019 Simon Davies

Rivers in southeast Asia transport more plastic to the ocean than some rivers in Europe, evidence from a new study in Environmental Research Letters suggests.

In the first study of its kind, researchers from the Netherlands examined the amounts of floating plastic debris at 24 locations on rivers in seven European and Asian countries.

Lead author Caroline van Calcar, from Delft University of Technology, said: “Land-based plastics, washed into the ocean by rivers, are believed to be the main source of marine plastic litter. It is a global issue on which urgent action is needed.

“However, to date we had very little hard evidence of the amount of plastic rivers contain. Our aim was to gather this evidence in a consistent way, to allow for a direct comparison of rivers in different parts of the world.

“Therefore, as well as several main European rivers (in the Netherlands, France, and Italy), we looked at river basins with high amounts of mismanaged plastic waste (in Thailand, Indonesia, Malaysia, and Vietnam). We found that the average scale, composition, and distribution of plastic debris transport over the river width varied considerably for each river. Our results show that local in-situ data is essential to understanding the origin and fate of river plastic debris, and to optimising prevention and collection strategies.”

The rivers the research team studied were: the Rhine (the Netherlands); the Seine (France); the Rhône (France); the Tiber (Italy); the Saigon (Vietnam); the Mekong (Vietnam); the Chao Phraya (Thailand); the Pahang (Malaysia); the Klang (Malaysia); the Kuantan (Malaysia); the Ciliwung (Indonesia); the Pesanggrahan (Indonesia); and the Kanal Banjir Timur (Indonesia).

They used visual counting and debris sampling to assess the scale of plastic transport, its distribution across the river width, and the plastic polymer composition.

Co-author Dr Tim van Emmerik, from Wageningen University, the Netherlands, said: “Several waterways in Indonesia and Vietnam contained up to four times more plastic than waterways in Italy, France, and The Netherlands in terms of plastic items per hour.

“According to recent model estimates, the top 10 to 20 polluting rivers are mostly located in Asia and account for 67-95 percent of the global total. For the first time, our results provide observational evidence that, for the sampled rivers, southeast Asian rivers transport considerably more plastics towards the ocean.

“The origin and fate of riverine plastics is complex. Influencing factors can include the type of waste management, the location of cities, dams, and litter traps, the seasonality of rainfall and river discharge, and flood events.”

Ms van Calcar concluded: “It is also important to note that the composition of plastic debris can vary widely for one polymer type between different rivers in the same country. The composition varies also between locations in one river. This may be due to differences in plastic consumption and management practices, as well as transport mechanisms, and other factors. The plastic polymer composition can provide information on the type of product that was littered. Therefore, determining the type of plastic can lead to the source of the plastic, and hence to improvement of waste management and regulation.”


(Image: dani3315/iStock)

Heat Energy Has Leapt Across an Empty Vacuum Thanks to a Weird Quantum Effect

David Nield, 14 Dec 2019

Quantum physics has up-ended classical physics again, this time enabling heat to transfer across empty space without any of the atoms or molecules that would usually be needed for such a push.

The research taps into a particular bit of quantum weirdness known as the Casimir effect: the idea that empty space isn’t really empty, but filled with tiny electromagnetic fluctuations that can interfere with the objects around them.

Scientists have previously demonstrated how the Casimir effect can move nanoparticles in a vacuum, and push two objects closer together; this latest study demonstrates how it can work with heat transfer, too.

(Image: Zhang Lab/UC Berkeley)

This discovery could influence the way that nanoscale electronic components and even quantum computers are designed, managing heat across the smallest scales as our devices shrink down.

“Heat is usually conducted in a solid through the vibrations of atoms or molecules, or so-called phonons – but in a vacuum, there is no physical medium,” says mechanical engineer Xiang Zhang from the University of California, Berkeley. “So, for many years, textbooks told us that phonons cannot travel through a vacuum.

“What we discovered, surprisingly, is that phonons can indeed be transferred across a vacuum by invisible quantum fluctuations.”

The point was proven by two gold-coated silicon nitride membranes placed a few hundred nanometres apart inside a vacuum chamber. Even with complete nothingness between the membranes, and negligible light energy, heating up one membrane caused the other to warm up too.

At larger scales this wouldn’t happen – it’s why the pocket of vacuum between the two walls of a thermos keeps your coffee warm, because the heat can’t easily cross the gap – but at the tiniest of scales the implications could be profound.

Everything about the experiment had to be carefully configured and controlled: from precisely controlling the temperature of the membranes, to keeping the lab chamber completely free of dust.

(Image: Violet Carter/UC Berkeley)

Although the distance that the heat travelled is very small, relatively speaking, it was far enough to rule out other causes for the transfer of heat, such as energy from electromagnetic radiation (which is how the Sun warms Earth through the vacuum of space).

And the scientists behind the study think that there could be more to come – if heat can travel through empty space then perhaps sound can, too. After all, they both rely on molecular vibrations to get around.

That will have to wait for another experiment. For now, the team is looking at ways this special quantum effect could be used to manage thermal flow in the computers and electronics of the future.

“This discovery of a new mechanism of heat transfer opens up unprecedented opportunities for thermal management at the nanoscale, which is important for high-speed computation and data storage,” says mechanical engineer Hao-Kun Li from Stanford University.

“Now, we can engineer the quantum vacuum to extract heat in integrated circuits.”

The research has been published in Nature.



100 incredible years of physics and the IOP

2020 marks the 100th anniversary of the IOP. As part of our celebrations we invited six members of the IOP to give their personal view of their discipline across the last hundred years; the physicists who inspired them and the discoveries and innovations that have helped shape our lives.

Over the next year, we will be adding to this online resource and we are asking members to suggest future interviewees to be considered for inclusion. Please email your suggestions to anniversary@iop.org.


100 incredible years of physics

  • 100 years of astrophysics – featuring Dame Jocelyn Bell Burnell, University of Oxford
  • 100 years of materials science – featuring Dr Clara Barker, University of Oxford
  • 100 years of medical physics – featuring Dr Dimitra Darambara, Institute of Cancer Research
  • 100 years of nuclear physics – featuring Professor Jim Al-Khalili, University of Surrey
  • 100 years of particle physics – featuring Professor Val Gibson, University of Cambridge
  • 100 years of quantum physics – featuring Professor Kai Bongs, University of Birmingham
  • The next 100 years – how might physics impact future generations? In this centenary year we ask: what could the future hold for physics and society?



Imagining the future: Why society needs science fiction

'Leaving the opera in the year 2000' by Albert Robida.

Image credit: Albert Robida/Public domain.

First published on 3rd April 2012. Last updated 11 August 2018 by Dr Helen Klus.

1. What is science fiction? 

While there’s no single accepted definition of science fiction, it usually deals with worlds that differ from our own as the result of new scientific discoveries, new technologies, or different social systems. It then looks at the consequences of this change. Because of this broad definition, science fiction can be used to consider questions regarding science, politics, sociology, and the philosophy of the mind, as well as any questions about the future.

It’s sometimes hard to distinguish science fiction from fantasy. This is because the definition of science has changed drastically over time, and as Arthur C. Clarke famously stated,

…any sufficiently advanced technology is indistinguishable from magic[1].

One of the greatest astronomers of the 17th century, Johannes Kepler, had to invoke demons to explain how someone could travel to the Moon in his novel Somnium, and 18th century author Samuel Madden used angels to explain time travel from the year 1998 in Memoirs Of the Twentieth Century.

2. A brief history of science fiction 

Since there is no single accepted definition of science fiction, there is no way to say what constitutes the first science fiction story. Most religious texts and poems have elements that are also found in science fiction, especially those that describe the creation or destruction of the universe, and many gods are associated with powers that science fiction has since utilised. Some ancient philosophical texts also have science fiction-like imagery, Plato’s The Republic, for example, discusses realms that we cannot experience with our senses.

Throughout much of human history, society did not change rapidly enough for people to be able to envision a future that was different from their own. At the same time, many parts of the Earth remained unexplored, and this may be why many older science fiction novels were set in the present. Science fiction from this period is also more likely to address social rather than scientific problems: firstly, because there was less science to utilise, and secondly, because science fiction offered an ideal medium for making social comments that could not be published as fact.

The first novel to involve rocket powered space travel was written by author and duelist Cyrano de Bergerac in the 17th century, shortly after the Copernican revolution. In the 18th century, Voltaire discussed the Earth from the perspective of a super-advanced alien from another star system. In the 19th century, Mary Shelley warned of the dangers of science, Jules Verne depicted scientists as heroes, and H. G. Wells used science fiction to satirise society and make predictions about the future.

Wells’ The World Set Free is perhaps the best example of prophetic science fiction. Published in 1914, it described a new type of bomb fuelled by nuclear reactions; Wells predicted it would be discovered in 1933 and first detonated in 1956. Physicist Leo Szilard read the book and patented the idea[2]. Szilard was later directly responsible for the creation of the Manhattan Project, which led to two nuclear bombs being dropped on Japan in 1945.

In the first half of the 20th century, Yevgeny Zamyatin, Aldous Huxley, and George Orwell provided the first dystopian science fiction, inspired by the Russian Revolutions and two World Wars. In the last half of the century, science fiction writers such as Philip K. Dick, Arthur C. Clarke, William Gibson, and Greg Egan explored the nature of reality and the human mind, through the creation of synthetic life and artificial realities.

Zombie apocalypses are currently popular in science fiction, and this might be because they represent the breakdown and rebuilding of society. This seems apt considering we are living in a time when people from all around the world are protesting against their governments, the gap between rich and poor is wider than ever before, and we are undergoing a global recession.

There are numerous examples of books that have contributed to the history of science fiction, and these have been summarised by artist Ward Shelley. Some of the best examples are given at the bottom of the article.

Painting depicting the history of science fiction.

The History of Science Fiction (click to enlarge). Image credit: Ward Shelley/Copyrighted, used with permission.

3. Why science fiction is important 

Science fiction is important for at least three reasons. Firstly, by considering worlds that are logically possible, science fiction can be used to explore our place in the universe and consider fundamental philosophical questions about the nature of reality and the mind. Books that explore these issues include Flatland by Edwin Abbott Abbott, Ubik by Philip K. Dick, and 2001: A Space Odyssey by Arthur C. Clarke. Clarke once described science fiction as “the only genuine consciousness-expanding drug”[3].

Secondly, science fiction can inspire more people to become scientists. Edwin Hubble, who provided strong evidence for the big bang theory and was the first person to prove that galaxies exist outside of the Milky Way, was inspired to become a scientist after reading Jules Verne novels[4]. Astronomer and science fiction author Carl Sagan was influenced by Robert A. Heinlein[5], and theoretical physicist Michio Kaku enjoyed the television show Flash Gordon as a child[6a].

Kaku stated,

…years later, I began to realize that the two passions of my life – that is, physics and understanding the future are really the same thing – that if you understand the foundations of physics, you understand what is possible and you understand what could be just beyond the horizon[6b].

Thirdly, and perhaps most importantly, science fiction is the only genre that depicts how society could function differently. This is the first step towards progress as it allows us to imagine the future we want, and consider ways to work towards it. It also makes us aware of futures we wish to avoid, and helps us prevent them.

Perhaps the most famous example of the positive effect of science fiction comes from the inclusion of a multiracial cast on the original Star Trek television series. When Nichelle Nichols, who played Lieutenant Uhura, was considering leaving the series, civil rights leader Martin Luther King Jr. convinced her to stay[7a]. King argued that her inclusion on Star Trek was important because, as a black woman, she helped represent a future people could aspire to, one where people were judged solely on the content of their character.

Shortly after, Nichols publicly criticised NASA for only selecting white male astronauts; she was then invited to NASA headquarters and asked to assist in convincing former applicants to reapply[7b]. This led to the selection of Sally Ride and Guion Bluford, who became NASA’s first female and first black American astronauts respectively. NASA’s first female black American astronaut, Mae Jemison, directly cited Star Trek as an influence[8], and later appeared on Star Trek: The Next Generation.

A photograph of Mae Jemison in space.

Mae Jemison. Image credit: NASA/Public domain.

In some ways, society has changed dramatically since Star Trek first aired in 1966. Many things that were once science fiction have already become reality: we have walked on the Moon, we have created clones, and synthetic life, and many people now have access to almost all human knowledge through a device that can fit in their pocket. Technology is progressing so fast that it is changing society, leading to unprecedented moral dilemmas and scientific challenges. This means that science fiction is more important now than ever.

As well as considering the effects of current and developing technologies, science fiction can help address long-term problems, such as global warming. It can help with the development of space exploration, and prepare us for problems we may not anticipate. One day, time travel, teleportation, or the genetic engineering of humans may happen; we might communicate with aliens, invent simulated realities, or build intelligent robots, and we’ll be better prepared to deal with these, and other potential dilemmas, if we have already thought about them.

Scientist and science fiction author Isaac Asimov summarised the importance of science fiction in 1978, stating,

It is change, continuing change, inevitable change, that is the dominant factor in society today. No sensible decision can be made any longer without taking into account not only the world as it is, but the world as it will be…Science fiction writers foresee the inevitable, and although problems and catastrophes may be inevitable, solutions are not.

Individual science fiction stories may seem as trivial as ever to the blinder critics and philosophers of today – but the core of science fiction, its essence…has become crucial to our salvation if we are to be saved at all[9].

4. Books that have contributed to the history of science fiction 

~2nd century – True History by Lucian of Samosata
True History is a fictional novel written to satirise stories like Homer’s Odyssey, or Antonius Diogenes’ Of the Wonderful Things Beyond Thule, which involves humans travelling to the Moon. Both present fantastic things as if they are real. Lucian wrote the most fantastical story he could, and used his new world in order to identify problems in the real world. True History involves encounters with life forms from the Sun and the Moon, as well as life forms created by human technology.

~750 – 13th century – One Thousand and One Nights by various authors
One Thousand and One Nights, also known as Arabian Nights, is a collection of stories from the Middle East and South Asia, compiled during the Islamic Golden Age. In The Adventures of Bulukiya, the hero travels across the cosmos, to worlds inhabited by talking snakes and trees. Abdullah the Fisherman and Abdullah the Merman describes an underwater society that practices primitive communism, and mechanical life forms are mentioned in The City of Brass, The Ebony Horse, and the Third Qalandar’s Tale.

~15th century – The Voynich manuscript by unknown
While not necessarily science fiction, the Voynich manuscript is perhaps the most mysterious book in the world. Discovered in 1912, and written between 1404 and 1483, it contains over 200 pages of currently undecipherable text, and hundreds of pictures of unidentified species and astronomical charts. A similar book, Codex Seraphinianus, was published in 1981.

Pages from the Voynich manuscript showing plants.

The Voynich manuscript, 15th century. Image credit: The Voynich manuscript/Public domain.

1516 – Utopia by Thomas More
The term utopia is now applied to all depictions of idealised societies. Thomas More’s Utopia is ideal in some ways, but has a strict penal system, with criminals becoming slaves, and those guilty of premarital sex punished with a lifetime of enforced celibacy.

1634 – Somnium by Johannes Kepler
Johannes Kepler was the first person to show that planets move in ellipses; he also calculated the planets’ relative distances from the Sun, and their orbital speeds. After making these discoveries, Kepler wrote the novel Somnium, which is Latin for ‘The Dream’. In Somnium, an astronomer’s student is transported to the Moon by lunar demons, who are able to travel to Earth during solar eclipses. Kepler describes the effects of gravity, and how the Earth would look from the Moon.

1657 & 1662 – A Voyage to the Moon and A Voyage to the Sun by Cyrano de Bergerac
Writer and duelist Cyrano de Bergerac wrote two science fiction novels in the 17th century. The first involved a trip to the Moon and the second to the Sun. These books mocked the idea that the Earth was the centre of creation, and that only humans possess self-consciousness. A Voyage to the Moon contains the first example of rocket-powered space flight.

1666 – The Blazing World by Margaret Cavendish
In The Blazing World, Margaret Cavendish’s protagonist describes a passage to another world, with blazing stars in the sky, which can be reached from the North Pole. In this world, she meets all sorts of sentient animals, such as Bear-men, Geese-men, and Ant-men. She discusses various scientific theories with them, including atomic theory, and, when she hears that her land is under threat, she travels home in a submarine.

1726 – Gulliver’s Travels by Jonathan Swift
Gulliver’s Travels explores philosophical questions regarding politics, morality, and the limits of human knowledge. During his travels, Gulliver encounters flying islands, and meets talking animals, as well as a race of tiny people, and a race of giants.

Cover of the 1856 edition of Gulliver's Travels by Jonathan Swift.

Gulliver’s Travels by Jonathan Swift, 1856 edition. Image credit: Jonathan Swift/Public domain.

1733 & 1763 – Memoirs Of the Twentieth Century and The Reign of George VI, 1900 to 1925 by Samuel Madden
In Memoirs Of the Twentieth Century, an angel provides the narrator with documents from 1998, when George VI is a world emperor. The Reign of George VI, 1900 to 1925 describes how George VI conquered France and Russia, and explains how the development of canals turned villages into cities.

1752 – Micromégas by Voltaire
In Micromégas, Voltaire describes the Earth from the perspective of a centuries-old, alien genius. The aliens in Micromégas are so large that we appear microscopic to them.

1818 & 1826 – Frankenstein and The Last Man by Mary Shelley
Mary Shelley explored the potential negative effects of scientific advancements. Frankenstein explores the consequences of creating life, and The Last Man is set in the late 21st century, after a plague has wiped out most of the people on Earth.

1865 & 1870 – From the Earth to the Moon and Twenty Thousand Leagues Under the Sea by Jules Verne
Jules Verne wrote adventure stories that involved technology. He included a lot of details to explain how his ideas could one day be possible. In From the Earth to the Moon, Verne describes how three men build and launch a rocket to the Moon, and Twenty Thousand Leagues Under the Sea follows the adventures of a marine biologist in a submarine. Verne also wrote A Journey to the Center of the Earth, and Around the World in Eighty Days.

1872 – Erewhon: or, Over the Range by Samuel Butler
In Erewhon, Samuel Butler describes a fictional land where mechanical life forms undergo evolution. Butler considered the idea that machines may one day be the dominant species on Earth.

1884 – Flatland: A Romance of Many Dimensions by Edwin Abbott Abbott
Flatland describes the adventures of a two-dimensional life form, a square. The first part of the story satirises Victorian society. The second part describes how the square’s perspective is changed after travelling to one-dimensional, and then three-dimensional, worlds. It suggests that there might be higher-dimensional beings that we are unaware of.

Cover of the 1953 edition of Flatland by Edwin Abbott Abbott.

Flatland by Edwin Abbott Abbott, 1953 edition. Image credit: Edwin Abbott Abbott/Public domain.

1888 & 1897 – Looking Backward, 2000 to 1887 and Equality by Edward Bellamy
Looking Backward, 2000 to 1887 follows the story of an American who is hypnotised in the late 19th century, and wakes up in the year 2000 to find that the United States has become a socialist utopia. In its sequel, Equality, Bellamy discusses the role of women in the twentieth century.

1890 – The Twentieth Century: The Electric Life by Albert Robida
In The Twentieth Century: The Electric Life, author and artist Albert Robida envisions the year 1955. He describes the equality of women, devices for communicating at a distance, screens for displaying visual information, personal aircraft, military submarines, and biological warfare.

1895 – The Time Machine by H. G. Wells
H. G. Wells often used science fiction to satirise society. In The Time Machine, the narrator travels to the future where he finds that people have evolved into two species, the childlike Eloi and the violent Morlocks. He considers that the Eloi could have evolved from the upper class and the Morlocks from the working class, and that both have lost an important part of their humanity. Wells also wrote The War of the Worlds, The Island of Doctor Moreau, and The Invisible Man.

1921 – We by Yevgeny Zamyatin
We is arguably the first dystopian science fiction novel. It was written by Yevgeny Zamyatin in response to the 1905 and 1917 Russian revolutions, and the First World War. Zamyatin describes a totalitarian state that attempts to create a utopia, resulting in a society where people lose their personal identities. People are known by numbers rather than names, and they never know when the state is watching; it could be watching at any time.

1931 – Brave New World by Aldous Huxley
Brave New World is set in a dystopian future where much of a person’s identity is decided before they are born, shaped by reproductive technology and social conditioning. Aldous Huxley later considered how a utopia could be formed in his novel Island.

1942 – 1992 – Various works by Isaac Asimov
Most of Isaac Asimov’s stories are set in the same universe, and Asimov charts the evolution of humans from the creation of robots in the 1990s through to the colonisation of the Galaxy over the next 10,000 years. Asimov introduced his three laws of robotics in the short story Runaround, published in 1942.

1948 – Nineteen Eighty-Four by George Orwell
In Nineteen Eighty-Four, George Orwell describes a technologically advanced state that criminalises individuality, and dominates by controlling information.

1961 – Stranger in a Strange Land by Robert A. Heinlein
Stranger in a Strange Land explores what it’s like for a human to come to Earth for the first time, having been born on Mars and raised by Martians. Heinlein also wrote Starship Troopers and The Moon Is a Harsh Mistress.

1968 – Do Androids Dream of Electric Sheep? by Philip K. Dick
In Do Androids Dream of Electric Sheep?, Philip K. Dick considers the moral implications of creating artificial life in a world damaged by nuclear warfare. Dick also wrote the alternative history novel The Man in the High Castle, and considered the nature of reality in Ubik.

1968 – 2001: A Space Odyssey by Arthur C. Clarke
2001: A Space Odyssey was developed as a novel and a film at the same time, and is partly based on some of Clarke’s earlier short stories. It explores the cultural and physical evolution of humans, the development of space exploration, and artificial intelligence.

1984 – Neuromancer by William Gibson
William Gibson describes a world where people can physically link their brains to a global computer network, known as the matrix. Neuromancer considers the potential effects of the internet, and popularised the term ‘cyberspace’.

1994 – Permutation City by Greg Egan
In Permutation City, the narrator wakes up to find that he is a Copy, a computer simulation previously recorded from his physical self. Permutation City then addresses philosophical questions regarding human identity, self-consciousness, and reality. It has been used to help explain the many-worlds interpretation of quantum mechanics.

2009 – The Windup Girl by Paolo Bacigalupi
The Windup Girl looks at the effects of biotechnology in a world undergoing global warming.


Dr Helen Klus | How We Came to Know the Cosmos | Science Blog | Timeline of the Universe


 Texas Heart Institute


Innovative Technologies & Techniques | Update on Diagnosis and Treatment of Abdominal Aortic Aneurysm (AAA) in Women

In Episode 8 of the Texas Heart Institute (THI) Education Series, Innovative Technologies & Techniques, Dr. Zvonimir Krajcer and Dr. Stephanie Coulter discuss the diagnosis and treatment of abdominal aortic aneurysm (AAA) in women.

The Texas Heart Institute provides a collection of online activities to further the education of physicians and other medical professionals beyond the walls of the institution on the most relevant topics in cardiovascular health and patient care. Other online resources for professionals and the community include the THI Journal, the THI Grand Rounds Archives, and the Heart Information Center.

The Institute’s educational activities include postdoctoral and allied training programs, seminars, symposia, grand rounds, scientific publications, and public education outreach.




Physicists Use Bubbling Quantum Vacuum to Hopscotch Heat Across Empty Space

By Rafi Letzter, Staff Writer

Heat isn’t supposed to move like this.

A photo shows the experimental device in which the never-before-seen effect took place. (Image: © Violet Carter, UC Berkeley)

When you touch a hot surface, you’re feeling movement. If you press your hand against a mug of tea, warmth spreads through your fingers. That’s the sensation of billions of atoms banging together. Tiny vibrations carry thermal energy from the water to the mug and then into your skin as one molecule knocks into the next, sending it careening into a third — and so on down the line. 

Heat can also cross space as waves of radiation, but without radiation, it needs stuff to pass through — molecules to bang into other molecules. Vacuums have no “stuff” in them, so they tend to trap heat. In Earth’s orbit, for example, one of the biggest engineering challenges is figuring out how to cool down a rocket ship.
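As a toy illustration of that neighbor-to-neighbor picture, here is a minimal sketch (ours, not from the study) of one-dimensional conduction: each cell’s temperature relaxes toward its neighbors’, so heat only moves where there is material to carry it.

```python
# Minimal sketch of conduction: energy spreads only by neighbor-to-neighbor
# exchange (an explicit finite-difference step for the 1D heat equation).
# Illustrative only; alpha < 0.5 keeps the scheme numerically stable.

def diffuse(temps, alpha=0.1, steps=100):
    """Relax a list of temperatures toward equilibrium."""
    t = list(temps)
    for _ in range(steps):
        t = [t[i] + alpha * (t[max(i - 1, 0)] - 2 * t[i] + t[min(i + 1, len(t) - 1)])
             for i in range(len(t))]
    return t

# Hot mug (100) on the left, skin (37) on the right; the middle cells carry the heat.
print([round(x, 1) for x in diffuse([100, 20, 20, 20, 37])])
```

With no cells in the middle there would be nothing to exchange with, which is exactly the textbook reason a vacuum gap should stop conduction.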

But now, researchers have shown that, on microscopic scales, this isn’t really true. In a new paper published Dec. 11 in the journal Nature, physicists showed that little vibrations of heat can cross hundreds of nanometers of empty space. Their experiment exploited an uncanny feature of the quantum vacuum: It isn’t really empty at all.


“We showed that two objects are able to ‘talk’ to each other across an empty space of, for example, hundreds of nanometers,” said Hao-Kun Li, co-lead author of the study. Li is a physicist at Stanford University who worked on this research while he was a doctoral student at the University of California, Berkeley.

Hundreds of nanometers is an infinitesimal space in human terms — a few ten-thousandths of a millimeter, or a bit bigger than a typical virus. But that’s still far too large a gap for heat to cross, at least according to the simple models of heat transfer.
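A quick unit check on those scales (the 300 nm figure below is an example value, not the experiment’s exact geometry):

```python
# Sanity-check the scales quoted above (illustrative numbers).
gap_nm = 300                  # "hundreds of nanometers"
gap_mm = gap_nm * 1e-6        # 1 nm = 1e-6 mm
virus_nm = 100                # a typical virus is on the order of 100 nm across
print(f"{gap_nm} nm = {gap_mm:.4f} mm, about {gap_nm / virus_nm:.0f}x a {virus_nm} nm virus")
```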

In 2011, researchers began to speculate that the quantum vacuum itself might be able to carry the molecular vibrations of heat. A paper published in the journal Applied Physics Letters pointed out that, in quantum physics, the vacuum is understood as a place roiling with energy. Random fluctuations of matter and energy pop into being and then disappear, generally at scales far smaller than people can imagine.

Those fluctuations are chaotic and unpredictable. But they could act like stepping stones to carry a wave of heat — in the form of a quantum excitation known as a phonon — across a gap. If you were a phonon setting out to cross a wide gap of, say, a few inches, the odds of the right fluctuations happening in the right order to get you across would be so low that the endeavor would be pointless.

But shrink the scale, the researchers showed, and the odds improve. At about 5 nanometers, this weird quantum hopscotch would become the dominant way to transfer heat across empty space — outpacing even electromagnetic radiation, previously thought to be the only way for energy to cross a vacuum.

(Image credit: Zhang Lab, UC Berkeley)

Still, those researchers predicted the effect would be significant only up to a scale of about 10 nanometers. But seeing anything on a 10-nanometer scale is difficult.

“When we designed the experiment, we realized this cannot easily be done,” Li told Live Science.

Even if the effect happens, the spatial scale is so small that there’s no good way to measure it conclusively. To produce the first direct observation of heat crossing a vacuum, the UC Berkeley physicists figured out how to scale the experiment way up.

“We designed an experiment that uses very soft mechanical membranes,” meaning they are very elastic, or stretchy, Li said.

If you pluck a rigid steel guitar string, he explained, the resulting vibrations will be much smaller than those you’d see if you plucked a more elastic nylon guitar string with the same strength. The same thing happened on the nanoscale in the experiment: Those ultra-elastic membranes allowed the researchers to see tiny heat vibrations that otherwise would not have been visible. By carefully bouncing light off those membranes, the researchers were able to observe phonons of heat crossing the still-minuscule gap.
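The string analogy boils down to Hooke’s law: for the same applied force, displacement scales inversely with stiffness. A minimal sketch with made-up stiffness values (not measurements from the Berkeley setup):

```python
# Same pluck (force), different stiffness: amplitude x = F / k (Hooke's law).
# The stiffness values are illustrative, not from the experiment.
force = 1.0                  # arbitrary pluck strength
for name, k in [("steel string", 100.0), ("nylon string", 5.0)]:
    print(f"{name}: amplitude = {force / k:.3f}")
# The soft membranes play the nylon string's role: tiny thermal forces
# produce vibrations large enough to detect by bouncing light off them.
```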

Down the road, Li said, this work might turn out to be useful — both to folks building regular computers and to quantum-computer designers.

A key problem in building better and faster microchips is figuring out how to disperse heat from circuits clustered into tiny spaces, Li said.

“Our finding actually implies that you could engineer the vacuum to dissipate heat from computer chips or nanoscale devices,” he said.

If you were to tune the vacuum by properly shaping it with the right materials, it might — far in the future — become more effective at pulling heat off a chip than any existing medium, he said.

The techniques the researchers employed could also be used to entangle the phonons — the vibrations themselves — across different membranes. That would link the phonons on a quantum level in the same way quantum physicists already link photons, or light particles, that are separated in space. Once linked, the phonons could be used to store and transfer quantum information, to function as the “mechanical qubits” of a hypothetical quantum computer. And once cooled down, he said, the phonons should be even more efficient at long-term data storage than traditional qubits.

Originally published on Live Science.




Futurism | The Byte

NASA Engineer Says New Thruster Could Reach 99% Speed of Light

By Jon Christian, October 14th 2019 | Filed under: Hard Science

Thruster Buster

New Scientist is reporting that NASA engineer David Burns is making some bold claims about a conceptual new spaceship thruster he calls the “helical engine” — a concept the magazine admits “may violate the laws of physics.”

“The engine itself would be able to get to 99 per cent the speed of light if you had enough time and power,” Burns told New Scientist.

Light Speed

The engine, described in a recent paper Burns posted to a NASA server, takes advantage of a weird glitch in Einsteinian physics.

By accelerating a loop of ions to nearly light speed and then manipulating their velocity — and hence, because of the laws of relativity, their mass — the engine achieves the ultimate space travel free lunch: forward thrust without shooting anything out behind.

Caveat Engine

Even if the engine works in practice, it’ll have other disadvantages. According to New Scientist, a helical engine that was 200 meters long would generate about as much force as typing on a keyboard — so, while Burns may be right that it could accelerate to near-light speed, it would take a very long time.
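To put rough numbers on both points, here is a back-of-the-envelope sketch; the ~1 N thrust (about the force of a keystroke) and the 100-tonne craft are our assumptions for illustration, not figures from Burns’ paper:

```python
import math

C = 299_792_458.0                 # speed of light, m/s

# The relativistic effect the engine tries to exploit: mass-energy grows with speed.
v = 0.99 * C
gamma = 1 / math.sqrt(1 - (v / C) ** 2)
print(f"gamma at 0.99c = {gamma:.2f}")        # ~7.09x the rest mass-energy

# How long keyboard-level thrust would take to build that momentum.
thrust_n = 1.0                    # assumed: roughly the force of a keystroke
mass_kg = 100_000.0               # assumed: a 100-tonne craft
t = gamma * mass_kg * v / thrust_n            # time to reach p = gamma * m * v
print(f"~{t / 3.15e7:.1e} years of constant thrust")  # on the order of millions of years
```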

“I’m comfortable with throwing it out there,” Burns told the magazine. “If someone says it doesn’t work, I’ll be the first to say, it was worth a shot.”

READ MORE: NASA engineer’s ‘helical engine’ may violate the laws of physics [New Scientist]

More on light speed: Physicists Have a New Idea for Faster-Than-Light Travel




NASA has pinpointed an area where astronauts could land on Mars. Ice is so accessible there that they could dig it up with a shovel.

Morgan McFall-Johnsen Dec 12, 2019, 3:42 PM

A NASA image of Mars.

NASA scientists just discovered a vast region of Mars where water ice sits just an inch below the surface.

It could be the perfect place for astronauts to land, since any crew that touches down on the red planet would have to mine resources there, and water is the most important one. Mars astronauts will need to dig up ice to make drinking water and to create rocket fuel for the journey back to Earth — when you break down water into oxygen and hydrogen, the latter can be used to make fuel.
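The propellant arithmetic follows from the chemistry of splitting water; a short sketch using standard molar masses:

```python
# Splitting water for propellant: 2 H2O -> 2 H2 + O2.
# Mass fractions from standard molar masses (H = 1.008, O = 15.999 g/mol).
M_H2O = 2 * 1.008 + 15.999        # ~18.015 g/mol
yields = {
    "hydrogen (fuel)":   2 * 1.008 / M_H2O,   # ~11.2% by mass
    "oxygen (oxidizer)": 15.999 / M_H2O,      # ~88.8% by mass
}
for product, frac in yields.items():
    print(f"1 kg of water ice yields ~{frac * 1000:.0f} g of {product}")
```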

“Bringing your own water from Earth would be incredibly expensive,” Sylvain Piqueux, the NASA planetary scientist who led the research, told Business Insider. “Everything that you don’t have to bring with you leaves more room for a science experiment or additional engineering capabilities.”

‘If you bring a rake or a shovel, you could access it’

The annotated area of Mars in this illustration holds near-surface water ice that would be easily accessible to astronauts.

Piqueux and his team discovered the ice field when they examined data from NASA’s Mars Reconnaissance Orbiter (MRO) and Mars Odyssey orbiter.

To their surprise, this ice appears to be as little as an inch below the planet’s surface.

“If you bring a rake or a shovel, you could access it. You wouldn’t need to bring heavy equipment to access it,” Piqueux said. “In this case, the surprise was that ice was right there, so shallow.”

Planning for a serious digging operation as part of any space mission requires heavy equipment, which means additional fuel would be needed for a launch. Plus, digging on Mars isn’t easy, as the InSight lander‘s heat probe discovered when it started trying to do that in February. The so-called “mole” instrument is supposed to dig 16 feet down, but it only made a few inches of progress before getting stuck.

InSight’s heat probe, or mole, backed about halfway out of the hole it had burrowed, October 26, 2019.

Then the mole mysteriously popped out of its hole in October. It’s been unable to dig deeper ever since.

NASA found water ice in an unexpected place

Mars has lots of surface water ice at its poles. But those poles aren’t a great place to send astronauts, since they’re freezing cold and shrouded in darkness for half the year.

Mars’ polar ice cap.

NASA doesn’t want to land astronauts at a spot near the planet’s equator, either, because it’s too warm for ice to exist. Unlike Earth, Mars doesn’t have a thick enough atmosphere to host stable liquid water. When ice gets too warm, the water skips the liquid stage altogether and sublimates directly into vapor.
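That sublimation behavior falls out of water’s phase diagram: liquid water needs a pressure above the triple point, and the Martian surface sits right at that edge. A quick comparison using standard reference values (not figures from the article):

```python
# Why warming Mars ice skips the liquid stage: surface pressure vs. triple point.
TRIPLE_POINT_PA = 611.657     # water's triple-point pressure (standard value)
MARS_SURFACE_PA = 600.0       # typical Mars surface pressure; varies roughly 400-870 Pa

if MARS_SURFACE_PA < TRIPLE_POINT_PA:
    print("Below the triple point: warming ice sublimates straight to vapor.")
else:
    print("Liquid water could briefly be stable at this pressure.")
```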

But the subsurface ice that Piqueux discovered is in a temperate area that’s just right for astronauts to land.

“Part of the excitement of this work is that we found ice in mid-latitudes,” Piqueux said. “We knew that there was ice potentially at those latitudes from other instruments, but we also thought that it would be located much deeper.”

Piqueux thinks this ice may have come from an ancient snowfall that was followed by a dust storm, which quickly blanketed the snow. That thin layer of dust could have preserved the frozen water for billions of years. 

Now that NASA researchers have pinpointed this area with plentiful water resources, they can start investigating potential landing sites there. They’ll search for a more specific place that’s safe for humans to land and also offers new opportunities for studying Mars’ surface and potential to host alien life. 

“It’s the beginning of putting together those parts of the puzzle for Mars,” Piqueux said.



60 Minutes


A Harvard geneticist’s goal: to protect humans from viruses, genetic diseases, and aging

George Church’s lab at Harvard Medical School is working to make humans immune to all viruses, eliminate genetic diseases and reverse the aging process. Scott Pelley reports on how close the geneticist’s team is to a breakthrough.

December 8, 2019 | Correspondent: Scott Pelley

Our lives have been transformed by the information age. But what’s coming next is likely to be more profound; call it the genetic information age. We have mapped the human genome, and in just the last few years, we have learned to read and write DNA like software. And you’re about to see a few breakthroughs-in-waiting that would transform human health. For a preview of this revolution in evolution, we met George Church, a world-leading geneticist whose own DNA harbors many eccentricities and a few genes for genius.

We found George Church in here.

Cory Smith: Most of these are frozen George. Little bits of George that we have edited all in different tubes.

Church threw himself into his work, literally. His DNA is in many of the experiments in his lab at Harvard Medical School. The fully assembled George Church is 6’5″ and 65. He helped pioneer mapping the human genome and editing DNA. Today, his lab is working to make humans immune to all viruses, eliminate genetic diseases, and reverse the effects of time.

Harvard geneticist George Church

Scott Pelley: One of the things your lab is working on is reversing aging.

George Church: That’s right.

Scott Pelley: How is that possible?

George Church: Reversing aging is one of these things that is easy to dismiss to say either we don’t need it or is impossible or both.

Scott Pelley: Oh, we need it.

George Church: Okay. We need it. That’s good. We can agree on that. Well, aging reversal is something that’s been proven about eight different ways in animals where you can get, you know, faster reaction times or, you know, cognitive or repair of damaged tissues.

Scott Pelley: Proven eight different ways. Why isn’t this available?

George Church: It is available to mice.

In lucky mice, Church’s lab added multiple genes that improved heart and kidney function and levels of blood sugar. Now he’s trying it in spaniels.

Scott Pelley: So is this gene editing to achieve age reversal?

George Church: This is adding genes. So, it’s not really editing genes. It’s, the gene function is going down, and so we’re boosting it back up by putting in extra copies of the genes.

Scott Pelley: What’s the time horizon on age reversal in humans?

George Church: That’s in clinical trials right now in dogs. And so, that veterinary product might be a couple years away and then that takes another ten years to get through the human clinical trials.

Human trials of a personal kind made George Church an unlikely candidate to alter human evolution. Growing up in Florida, Church was dyslexic, with attention deficit, and frequently knocked out by narcolepsy.

Scott Pelley: What was it that made you imagine that you could be a scientist?

George Church: The thing that got me hooked was probably the New York World’s Fair in 1964. I thought this is the way we should all be living. When I went back to Florida, I said, “I’ve been robbed,” you know? “Where is it all?” So, I said, “Well, if they’re not going to provide it, then I’m gonna provide it for myself.”

With work and repetition, he beat his disabilities and developed a genius for crystallography, a daunting technique that renders 3D images of molecules through X-rays and math. But in graduate school at Duke, at the age of 20, his mania for the basic structures of life didn’t leave time for the basic structure of life.

Scott Pelley: You were homeless for a time.

George Church: Yeah. Briefly.

Scott Pelley: Six months.

George Church: Six months.

Scott Pelley: And where were you sleeping when you were homeless?

George Church: Well, yeah. I wasn’t sleeping that much. I was mostly working. I’m narcoleptic. So, I fall asleep sitting up anyway.

His devotion to crystallography was his undoing at Duke.

George Church: I was extremely excited about the research I was doing. And so, I would put in 100-plus hours a week on research and then pretty much didn’t do anything else.

Scott Pelley: Not go to class.

George Church: I wouldn’t go to class. Yeah.

Duke kicked him out with this letter wishing him well in a field other than biology. But, it turned out, Harvard needed a crystallographer. George Church has been here nearly 40 years. He employs around 100 scientists, about half-and-half men and women.

Scott Pelley: Who do you hire?

George Church: I hire people that are self-selecting, they see our beacon from a distance away. There are a lot of people that are a little, you know, might be considered a little odd. “Neuroatypicals,” some of us are called.

Scott Pelley: “Neuroatypical?”

George Church: Right.

Scott Pelley: Unusual brains?

George Church: Right, yeah.

Some of Church’s “Neuroatypicals”

Parastoo Khoshakhlagh: One thing about George that is very significant is that he sees what you can’t even see in yourself.

Parastoo Khoshakhlagh and Alex Ng are among the “neuroatypicals.” They’re engineering human organ tissue.

Cory Smith: I think he tries to promote no fear of failure. The only fear is not to try at all.

Cory Smith’s project sped up DNA editing from altering three genes at a time to 13,000 at a time. Eriona Hysolli went to Siberia with Church to extract DNA from the bones of wooly mammoths. She’s editing the genes into elephant DNA to bring the mammoth back from extinction.

Eriona Hysolli: We are laying the foundations, perhaps, of de-extinction projects to come.

Scott Pelley: De-extinction.

Eriona Hysolli: Yes.

Scott Pelley: I’m not sure that’s a word in the dictionary yet.

Eriona Hysolli: Well, if it isn’t, it should be.

Scott Pelley: You know there are people watching this interview who think that is playing God.

George Church: Well, it’s playing engineer. I mean, humans have been playing engineer since the dawn of time.

Scott Pelley: The point is, some people believe that you’re mucking about in things that shouldn’t be disturbed.

George Church: I completely agree that we need to be very cautious. And the more powerful, or the more rapidly-moving the technology, the more cautious we need to be, the bigger the conversation involving lots of different disciplines, religion, ethics, government, art, and so forth. And to see what its unintended consequences might be.

Church anticipates consequences with a full-time ethicist in the lab, and he spends a good deal of time thinking about genetic equity, believing that genetic technology must be available to all, not just those who can afford it.


We saw one of those technologies in the hands of Alex Ng and Parastoo Khoshakhlagh. They showed us what they call “mini-brains,” tiny dots with millions of cells each. They’ve proven that cells from a patient can be grown into any organ tissue, in a matter of days, so drugs can be tested on that patient’s unique genome.

Scott Pelley: You said that you got these cells from George’s skin? How does that work?

Alex Ng: We have a way to reprogram essentially, skin cells, back into a stem cell state. And we have technologies where now we can differentiate them into tissue such as brain tissue.

Scott Pelley: So you went from George’s skin cells, turned those into stem cells, and turned those into brain cells.

Alex Ng: Exactly. Exactly.

Scott Pelley: Simple as that.

Organs grown from a patient’s own cells would eliminate the problem of rejection. Their goal is to prove the concept by growing full sized organs from Church’s DNA.

George Church: It’s considered more ethical for students to do experiments on their boss than vice versa, and it’s good to do it on me rather than some stranger because I’m as up to speed as you can be on the risks and the benefits. I’m properly consented. And I’m unlikely to change my mind.

Alex Ng: We have a joke in the lab, I mean, at some point, soon probably, we’re going to have more of his cells outside of his body than he has himself.

Church’s DNA is also used in experiments designed to make humans immune to all viruses.

George Church: We have a strategy by which we can make any cell or any organism resistant to all viruses by changing the genetic code. So if you change that code enough you now get something that is resistant to all viruses including viruses you never characterized before.

Scott Pelley: Because the viruses don’t recognize it anymore?

George Church: They expect a certain code provided by the host that they replicate in. The virus would have to change so many parts of its DNA or RNA that it can’t change them all at once. So it’s not only dead, but it can’t mutate to a new place where it could survive in a new host.
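The strategy rests on recoding: systematically reassigning codons so the host’s genetic code no longer matches what viruses expect. As a toy illustration only (the codons here are arbitrary synonyms, not the lab’s actual recoding scheme):

```python
# Toy sketch of "recoding": swap every occurrence of one codon for a synonymous
# one, genome-wide. TCG and AGC both encode serine, so the protein is unchanged.
# Real recoding also rewires the host's translation machinery; this shows only
# the sequence-level edit.

def recode(dna, old_codon="TCG", new_codon="AGC"):
    """Replace old_codon with new_codon at every in-frame codon position."""
    codons = [dna[i:i + 3] for i in range(0, len(dna) - len(dna) % 3, 3)]
    return "".join(new_codon if c == old_codon else c for c in codons)

genome = "ATGTCGAAATCGTAA"         # toy sequence: Met-Ser-Lys-Ser-Stop
print(recode(genome))              # ATGAGCAAAAGCTAA: same protein, different code
```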

Correspondent Scott Pelley with George Church

Yes, he’s talking about the cure for the common cold and the end of waiting for organ transplants. It’s long been known that pig organs could function in humans. Pig heart valves are routinely transplanted already. But pig viruses have kept surgeons from transplanting whole organs. Church’s lab altered pig DNA and knocked out 62 pig viruses.

Scott Pelley: What organs might be transplanted from a pig to a human?

George Church: Heart, lung, kidney, liver, intestines, various parts of the eye, skin. All these things.

Scott Pelley: What’s the time horizon on transplanting pig organs into human beings?

George Church: You know, two to five years to get into clinical trials. And then again, it could take ten years to get through the clinical trials.

Church is a role model for the next generation. He has co-founded more than 35 startups. Recently, investors put $100 million into the pig organ work. Another Church startup is a dating app that compares DNA and screens out matches that would result in a child with an inherited disease.

George Church: You wouldn’t find out who you’re not compatible with. You’ll just find out who you are compatible with.

Scott Pelley: You’re suggesting that if everyone has their genome sequenced and the correct matches are made, that all of these diseases could be eliminated?

George Church: Right. It’s 7,000 diseases. It’s about 5% of the population. It’s about a trillion dollars a year, worldwide.
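The screening logic itself is simple set arithmetic: flag a pairing only when both people carry a variant in the same recessive-disease gene. A minimal sketch with placeholder carrier panels (the gene names are real recessive-disease genes, but the data and matching rule are hypothetical, not the app’s actual method):

```python
# Hedged sketch of carrier matching: a pairing is flagged only if both
# partners carry a variant in the *same* recessive-disease gene.

def compatible(carriers_a, carriers_b):
    """True unless both partners carry the same recessive-disease gene."""
    return not (set(carriers_a) & set(carriers_b))

alice = {"CFTR"}                   # cystic fibrosis carrier
bob = {"HBB"}                      # sickle-cell carrier
carol = {"CFTR", "SMN1"}           # carries CFTR and SMA variants

print(compatible(alice, bob))      # True:  no shared gene, no elevated risk
print(compatible(alice, carol))    # False: both carry CFTR (~25% affected child)
```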

Church sees one of his own genetic differences as an advantage. Narcolepsy lulls him several times a day. But he wakes, still in the conversation, often discovering inspiration in his twilight zone.

Scott Pelley: If somebody had sequenced your genome some years ago, you might not have made the grade in some way.

George Church: I mean, that’s true. I would hope that society sees the benefit of diversity not just ancestral diversity, but in our abilities. There’s no perfect person.

Despite imperfection, Church has co-authored 527 scientific papers and holds more than 50 patents. Proof that great minds do not think alike.

The best science can tell, it was about 4 billion years ago that self-replicating molecules set off the spark of biology. Now, humans hold the tools of evolution, but George Church remains in awe of the original mystery: how chemistry became life.

Scott Pelley: Is the most amazing thing about life, then, that it happened at all?

George Church: It is amazing in our current state of ignorance. We don’t even know if it ever happened in the rest of the universe. It’s awe-inspiring to know that it either happened billions of times, or it never happened. Both of those are mind-boggling. It’s amazing that you can have such complex structures that make copies of themselves. But it’s very hard to do that with machines that we’ve built. So, we’re engineers. But we’re rather poor engineers compared to the pseudo-engineering that is biological evolution.

Produced by Henry Schuster. Associate producer, Rachael Morehouse. Broadcast associate, Ian Flickinger.

© 2019 CBS Interactive Inc. All Rights Reserved.


Astronauts just printed meat in space for the first time — and it could change the way we grow food on Earth

Aria Bendix Oct 8, 2019, 4:38 PM

A spacecraft named “Soyuz MS-15” takes off for the International Space Station with meat cells onboard.
  • Russian cosmonauts on the International Space Station just printed meat in space for the first time.
  • On September 25, the Israeli food-tech startup Aleph Farms loaded a spacecraft with vials of cow cells.
  • When the cells arrived at the space station, cosmonauts fed them into a 3D printer, which produced thin steaks.
  • The experiment is a sign that meat could be grown in harsh environments on Earth.
  • Visit Businessinsider.com for more stories.

Space food is notoriously lackluster, but new technology is slowly revolutionizing the way astronauts eat. Whereas the first astronauts in space squeezed their meals from toothpaste-like tubes, today’s astronauts chow down on ice cream and fresh fruit, and season their meals with liquid salt and pepper.

But there are still limits to the types of food that can withstand microgravity. Anything that can produce crumbs, for instance, is considered dangerous, since food particles can clog a spacecraft’s electrical systems or air filters. Food also needs to last for an extended period of time, in case resupply missions go awry.

So tech companies are experimenting with ways to grow food onboard a spacecraft.

In late September, the Israeli food-tech startup Aleph Farms oversaw the growth of meat in space for the first time, with the help of a 3D printer. The experiment isn’t entirely new — Aleph Farms has been cooking up lab-grown steaks since December 2018 — but it does suggest that meat could be grown in all kinds of harsh environments.

Cosmonauts fed meat cells into a 3D printer

Cosmonaut Oleg Kononenko aboard the International Space Station during the first experiment with the 3D bioprinter in December 2018.

To make their lab-grown meat, Aleph Farms starts by extracting cells from a cow through a small biopsy. The cells are then placed in a “broth” of nutrients that simulates the environment inside a cow’s body. From there, they grow into a thin piece of steak.

Those who’ve tasted the product say it leaves something to be desired, but it’s meant to mimic the texture and flavor of traditional beef.

“We’re the only company that has the capacity to make fully-textured meat that includes muscle fibers and blood vessels — all the components that provide the necessary structure and connections for the tissue,” Aleph’s CEO and co-founder, Didier Toubia, told Business Insider last year.

But to grow the meat in space, Aleph Farms had to alter their process slightly.

First, they placed the cow cells and nutrient broth in closed vials. Next, they loaded the vials onto the Soyuz MS-15 spacecraft in Kazakhstan. On September 25, the spacecraft took off for the Russian segment of the International Space Station, orbiting about 250 miles above Earth.


When the vials arrived at the station, Russian astronauts — known as cosmonauts — inserted them into a magnetic printer from the Russian company 3D Bioprinting Solutions. The printer then replicated those cells to produce muscle tissue (the “meat”). The samples returned to Earth on October 3, without being consumed by the cosmonauts.

“This experiment was strictly proof of concept,” Grigoriy Shalunov, a project manager at 3D Bioprinting Solutions, told Business Insider. In the future, he said, the company hopes to provide a protein source for deep space missions and initial colonies on the moon and Mars.

The experiment isn’t the first time food has been artificially grown in space. In 2015, astronauts grew romaine lettuce on the International Space Station. NASA is now developing a “space garden” that can produce lettuce, strawberries, carrots, and potatoes on the Gateway, a proposed space station that could orbit the moon.

The experiment is a sign that meat could be grown anywhere on Earth

The ability to print meat in microgravity isn’t just good news for astronauts. It also suggests that companies could print meat in extreme environments on Earth — particularly in places where water or land is scarce.

Read more: This is what it’s like to eat food grown in a ‘space garden’

Aleph Farms’ slaughter-free steaks.

Normally, it takes up to 5,200 gallons of water to produce a single 2.2-pound steak (the large slabs typically sold at the grocery store). But growing cultured meat uses about 10 times less water and land than traditional livestock agriculture. Lab-grown meat is also quicker to produce — Aleph Farms calls its product a “minute steak,” because it takes just a couple of minutes to cook.
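Making the article’s water math explicit (figures as quoted, and approximate):

```python
# Water footprint per pound of beef, conventional vs. cultured (as quoted above).
gallons_per_steak = 5200
steak_lb = 2.2
conventional = gallons_per_steak / steak_lb    # ~2,364 gallons per pound
cultured = conventional / 10                   # "about 10 times less water"
print(f"conventional: ~{conventional:,.0f} gal/lb; cultured: ~{cultured:,.0f} gal/lb")
```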

The need to produce more food while conserving natural resources is more pressing than ever. A recent report from the United Nations’ Intergovernmental Panel on Climate Change found that our food industry — including the land and resources required to raise livestock — produces 37% of global greenhouse-gas emissions.

In a statement to Business Insider, Aleph Farms said its space experiment was a direct response to these challenges.

“It is time Americans and Russians, Arabs and Israelis rise above conflicts, team up, and unite behind science to address the climate crisis and food security needs,” the company said. “We all share the same planet.”


FDA approves first fish-oil drug for cutting cardiac risks

In patient testing, the drug reduced risks of potentially deadly complications including heart attacks and strokes about 25 percent.

Image: A capsule of the purified, prescription fish oil Vascepa. The FDA approved Vascepa for a wider group of patients. (Amarin / AP)

Dec. 13, 2019, 9:05 PM | By Associated Press

U.S. regulators on Friday approved expanded use of a fish oil-based drug for preventing serious heart complications in high-risk patients already taking cholesterol-lowering pills.

Vascepa was approved years ago for people with sky-high triglycerides, a type of fat in blood. The Food and Drug Administration allowed its use in a far bigger group of adults with high, but less extreme, triglyceride levels who have multiple risk factors such as heart disease and diabetes.

In patient testing, it reduced risks of potentially deadly complications including heart attacks and strokes about 25 percent.

Video: Prescription drug with fish oil reduces risk of heart attack or stroke, study finds (Nov. 10, 2018)

Amarin, the drug’s maker, set a list price of $303.65 per month. What patients pay will vary by insurance, and Amarin said it will offer financial help.

The Irish drugmaker estimates the new approval makes Vascepa, which is pronounced vas-EE’-puh and also is called icosapent ethyl, appropriate for up to 15 million U.S. patients.

High triglycerides can clog arteries and boost chances of developing heart disease, suffering heart attacks or strokes, needing a bypass or artery-clearing procedure, or being hospitalized for chest pain — just like high cholesterol and elevated blood pressure can do.

Amarin funded a five-year study of nearly 8,200 patients at high cardiac risk who were already taking medicines to lower bad cholesterol or control diabetes. The half who took Vascepa capsules along with those medicines had a 25 percent lower chance of heart complications and a 20 percent lower risk of death, compared with those adding dummy capsules of mineral oil to their medicines.
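For readers wondering how a “25 percent lower chance” is computed: it is a relative risk, the event rate on the drug divided by the event rate on placebo. The counts below are invented for illustration; only the roughly 25 percent relative reduction mirrors the reported result:

```python
# Relative risk from (hypothetical) trial counts; only the ~25% reduction
# is meant to mirror the reported result.
treated_events, treated_n = 675, 4089     # hypothetical: events on Vascepa
control_events, control_n = 901, 4090     # hypothetical: events on placebo

rr = (treated_events / treated_n) / (control_events / control_n)
print(f"relative risk = {rr:.2f} -> {(1 - rr) * 100:.0f}% relative reduction")
```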

Slightly more patients getting Amarin’s drug had an irregular heartbeat than those taking the dummy capsules, but other side effects were minor.


Previous studies testing other fish oil drugs to cut cardiac risk had flopped.

Heart disease affects an estimated 121 million American adults, causes about one in three deaths and costs more than $500 billion annually for treatment, according to the American Heart Association. Millions of Americans take nonprescription supplements of fish oil, also called omega-3 fatty acids, for their supposed heart benefits, but their dosages are far below Vascepa’s potency.

Vascepa sales brought Amarin just $287 million over 2019’s first nine months, but analysts forecast the much-broader approval could boost annual sales to $3 billion or more.



SpaceNews.com

Musk plans human Mars missions as soon as 2024

by Jeff Foust — June 2, 2016

Elon Musk, seen here speaking at a 2015 conference, said June 1 his upcoming Mars mission plan would allow for the first human Mars mission as soon as 2024. Credit: CASIS video still

BROOMFIELD, Colo. — A Mars mission architecture SpaceX Chief Executive Elon Musk will unveil in September will call for a series of missions starting in 2018 leading up to the first crewed mission to the planet in 2024, Musk said June 1.

In an on-stage interview at the Code Conference, run by the technology publication Recode in Rancho Palos Verdes, California, Musk repeated earlier comments that he would announce his architecture for human missions to Mars in September at the International Astronautical Congress in Guadalajara, Mexico.

That plan would start with the uncrewed launch of a Dragon spacecraft in 2018 on a Mars landing mission dubbed Red Dragon. SpaceX announced April 27 it would fly that mission in cooperation with NASA, which will provide technical expertise but no funding, in exchange for data from the spacecraft’s Mars landing attempt.

“The basic game plan is that we’re going to send a mission to Mars with every Mars opportunity from 2018 onwards,” he said. Launch windows for Mars missions open every 26 months, with the next opening in the spring of 2018.
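That 26-month rhythm is the Earth-Mars synodic period, which follows directly from the two planets’ orbital periods:

```python
# Earth-Mars synodic period: 1/S = 1/T_earth - 1/T_mars (sidereal periods in years).
T_EARTH, T_MARS = 1.000, 1.881
synodic_years = 1 / (1 / T_EARTH - 1 / T_MARS)
print(f"{synodic_years:.2f} years = {synodic_years * 12:.0f} months between launch windows")
```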

“We’re establishing cargo flights to Mars that people can count on,” he said. “I think if things go according to plan, we should be able to launch people probably in 2024, with arrival in 2025.”

Musk declined to give additional details about the plan, including the “very big rocket” that would launch the crewed vehicles. “In September I’ll tell you,” he said.

Earlier in the interview, Musk said that SpaceX would attempt to refly a recovered Falcon 9 first stage within the next three months. “We plan to refly one of the landed rocket boosters hopefully in about two or three months,” he said. “We want to start reflying them before the end of the summer.”

Musk didn’t disclose who the customer would be for the first launch of a Falcon 9 with a reused first stage, although company officials recently said a couple of potential customers had expressed interest. SpaceX has now landed four Falcon 9 first stages, although some will be used for ground tests and the one from the first landing, in December 2015, will be put on display outside the company’s Hawthorne, California headquarters.

Musk also said the first Falcon Heavy launch was still scheduled before the end of the year. That launch, he said, would not carry a payload, despite earlier reports that there was some interest from customers in flying on that vehicle.

He also addressed the lengthy development delays of the Falcon Heavy, whose first launch was originally scheduled to take place several years ago. “It’s not like we had a lot of pressing customers who wanted us to launch this,” he said. In fact, at least one company, ViaSat, decided to purchase a launch on an Ariane 5 because of Falcon Heavy delays, reserving its Falcon Heavy contract for a later mission.

Musk also said that SpaceX, which has launched three Falcon 9 missions in less than two months, would maintain a high launch rate. “We’re sort of backlogged on our launches and we’re trying to get them out as quickly as we can,” he said, referring to the June 2015 Falcon 9 launch failure that halted launches for nearly six months.

“The launches will take place every two to four weeks. That’s quite a high launch cadence,” he said of the company’s upcoming schedule.

Next year will also see the debut of the Dragon v2, also known as Crew Dragon, that SpaceX is developing for NASA’s commercial crew program. That vehicle will be used for transporting NASA astronauts to and from the International Space Station, and a version of it will also fly Mars missions.

Musk suggested he might fly on a Dragon vehicle in several years. “I think I will at some point,” he said when asked if he planned to fly in space. “I’ll probably go to orbit in four or five years, something like that.”


SpaceNews.com

Musk plans human Mars missions as soon as 2024

by Jeff Foust — June 2, 2016

Elon Musk, seen here speaking at a 2015 conference, said June 1 his upcoming Mars mission plan would allow for the first human Mars mission as soon as 2024. Credit: CASIS video still

BROOMFIELD, Colo. — A Mars mission architecture SpaceX Chief Executive Elon Musk will unveil in September will call for a series of missions starting in 2018 leading up to the first crewed mission to the planet in 2024, Musk said June 1.

In an on-stage interview at the Code Conference, run by the technology publication Recode in Rancho Palos Verdes, California, Musk repeated earlier comments that he would announce his architecture for human missions to Mars in September at the International Astronautical Congress in Guadalajara, Mexico.

That plan would start with the uncrewed launch of a Dragon spacecraft in 2018 on a Mars landing mission dubbed Red Dragon. SpaceX announced April 27 it would fly that mission working in cooperation with NASA, who will provide technical expertise but no funding in exchange for data from the spacecraft’s Mars landing attempt.

“The basic game plan is that we’re going to send a mission to Mars with every Mars opportunity from 2018 onwards,” he said. Launch windows for Mars missions open every 26 months, with the next opening in the spring of 2018.

“We’re establishing cargo flights to Mars that people can count on,” he said. “I think if things go according to plan, we should be able to launch people probably in 2024, with arrival in 2025.”

Musk declined to give additional details about the plan, including the “very big rocket” that would launch the crewed vehicles. “In September I’ll tell you,” he said.

Earlier in the interview, Musk said that SpaceX would attempt to refly a recovered Falcon 9 first stage within the next three months. “We plan to refly one of the landed rocket boosters hopefully in about two or three months,” he said. “We want to start reflying them before the end of the summer.”

Musk didn’t disclose who the customer would be for the first launch of a Falcon 9 with a reused first stage, although company officials recently said a couple of potential customers had expressed interest. SpaceX has now landed four Falcon 9 first stages, although some will be used for ground tests and the one from the first landing, in December 2015, will be put on display outside the company’s Hawthorne, California headquarters.

Musk also said the first Falcon Heavy launch was still scheduled before the end of the year. That launch, he said, would not carry a payload, despite earlier reports that where was some interest from customers in flying on that vehicle.

He also addressed the lengthy development delays of the Falcon Heavy, whose first launch was originally scheduled to take place several years ago. “It’s not like we had a lot of pressing customers who wanted us to launch this,” he said. In fact, at least one company, ViaSat, decided to purchase a launch on an Ariane 5 because of Falcon Heavy delays, reserving its Falcon Heavy contract for a later mission.

Musk also said that SpaceX, which has launched three Falcon 9 missions in less than two months, would maintain a high launch rate. “We’re sort of backlogged on our launches and we’re trying to get them out as quickly as we can,” he said, referring to the June 2015 Falcon 9 launch vehicle that halted launches for nearly six months.

“The launches will take place every two to four weeks. That’s quite a high launch cadence,” he said of the company’s upcoming schedule.

Next year will also see the debut of the Dragon v2, also known as Crew Dragon, that SpaceX is developing for NASA’s commercial crew program. That vehicle will be used for transporting NASA astronauts to and from the International Space Station, and a version of it will also fly Mars missions.

Musk suggested he might fly on a Dragon vehicle in several years. “I think I will at some point,” he said when asked if he planned to fly in space. “I’ll probably go to orbit in four or five years, something like that.”COMMERCIALLAUNCHELON MUSKFALCON HEAVYMARSPEOPLESPACEX


 


Dec 11, 2019

Intel says this breakthrough will make quantum computing more practical

Posted by Klaus Baldauf in categories: computing, quantum physics


Though trailing quantum rivals like Google and IBM, Intel thinks it can win the long war through something it’s always been great at: miniaturization.

[Photo: courtesy of Intel]


Phys.org


DECEMBER 10, 2019

Oxygen shaped the evolution of the eye

by Aarhus University

[Figure: Vascular networks in the retina of a goldfish. The retinal vasculature is divided into separate layers: capillaries on the outer side of the retina, capillaries on the inner side of the retina, and capillaries inside the retina (the last not found in the goldfish). An interactive model of the goldfish vasculature is available as Supplementary File 4 at https://doi.org/10.7554/eLife.52153. Credit: Henrik Lauridsen, AU]

Convergent origins of new mechanisms to supply oxygen to the retina were directly linked to concurrent enhancements in the functional anatomy of the eye.

In his On the Origin of Species, Darwin used the complexity of the eye to argue his theory of natural selection, and the eye has continued to fascinate and trouble evolutionary biologists ever since.

In a paper published today in eLife, researchers from Aarhus University teamed up with scientists from eight international institutions to explore the physiological requirements for the evolution of improved eyesight.

They argue that the evolution of high-acuity vision in ancestral animals was constrained by the ability to deliver sufficient amounts of oxygen to cells in the retina. Their study uncovered a fascinating pattern of mechanisms to improve retinal oxygen supply capacity that evolved in concert with enhanced retinal morphology to improve vision. The model fits across all bony vertebrates from fish through to birds and mammals. These findings add an additional component to our understanding of the evolution of the eye, which has fascinated and troubled evolutionary biologists for centuries.

The rises and falls of retinal oxygen supply

The study took advantage of the diversity in the physiology and anatomy among eyes from 87 animal species, including fishes, amphibians and mammals. By placing these species on the tree of life, the authors unravelled the evolutionary history of the eye from a 425 million-year-old extinct ancestor of modern vertebrates to present-day animals. They identified three distinct physiological mechanisms for retinal oxygen supply that are always associated with improved vision. Thus, in fishes, mutations in hemoglobin were associated with the ability to deliver oxygen to the retina at exceptionally high oxygen partial pressures to overcome the significant diffusion distance to the retinal cells.

The authors show that the origin of this mechanism around 280 million years ago was associated with a dramatic increase in eye size and retinal thickness that directly links to improved light sensitivity and spatial resolution. This mechanism in hemoglobin was subsequently lost several times, possibly to avoid oxidative damage and gas bubble formation in the eye.

[Figure: The evolution of the size of the eye (A) and retina (B). The evolution of structures to supplement retinal oxygen supply is tightly coupled to the evolution of large eyes and a thick retina. The pecten oculi is a vascular structure found in the eyes of birds, the choroid rete mirabile is a gas gland found in the eyes of fishes, and intra-retinal capillaries are found in some mammals, including humans. Credit: Christian Damsgaard, AU]

Warm-blooded dinosaurs shaped the vision of mammals

The authors show that increased reliance on vision in mammals was associated with the evolution of capillary beds inside the retina despite the potential trade-off to visual acuity imposed by the bending of light by red blood cells.

Retinal capillaries in mammals originated around 100 million years ago, when dinosaurs evolved endothermy. Endothermy allowed these Mesozoic dinosaurs to hunt at night, which forced the previously nocturnal mammals into a diurnal lifestyle with an increased reliance on vision.

The new model of eye evolution shows that the evolution of intra-retinal capillaries coincided precisely with the improvements in vision around 100 million years ago. Further, it shows that some mammals lost retinal capillaries when they became less reliant on vision (e.g., echolocating bats).

Oxygen and vision go hand in hand

Overall, this analysis shows that the functional morphology of the eye has changed dynamically throughout animal evolution. Changes in eye morphology went hand in hand with parallel changes in retinal oxygen supply, and each oxygen-supply mechanism carried its own trade-offs. Those trade-offs were evidently acceptable in exchange for the improved visual acuity made possible by a thicker retina.

Overall, this study shows that adaptations to ensure oxygen delivery to the retina were a physiological prerequisite for the functional evolution of the eye.


More information: Christian Damsgaard et al., Retinal oxygen supply shaped the functional evolution of the vertebrate eye, eLife (2019). DOI: 10.7554/eLife.52153. Provided by Aarhus University.


EBioMedicine

Available online 3 December 2019. In Press, Corrected Proof.


Research paper: Phase I dose-escalation study to determine the safety, tolerability, preliminary efficacy and pharmacokinetics of an intratumoral injection of tigilanol tiglate (EBC-46)

Benedict J. Panizza, Paul de Souza, Adam Cooper, Aflah Roohullah, Christos S. Karapetis, Jason D. Lickliter, et al. https://doi.org/10.1016/j.ebiom.2019.11.037. Open access under a Creative Commons license.

Abstract

Background

Tigilanol tiglate, a short-chain diterpene ester, is being developed as intratumoral treatment of a broad range of cancers. We conducted the first-in-human study of intratumoral tigilanol tiglate in patients with solid tumors.

Methods

Tigilanol tiglate was administered in a multicentre, non-randomized, single-arm study, with escalating doses beginning at 0·06 mg/m2 in tumors estimated to be at least twice the volume of the injection (dose-escalation cohorts). Patients with smaller tumors were assigned to the local effects cohort and received the appropriate dose for tumor size.

Findings

Twenty-two patients were enrolled. The maximum dose was 3·6 mg/m2 and the maximum tolerated dose was not reached. There was one report of dose-limiting toxicity (upper airway obstruction), two serious adverse events (upper airway obstruction and septicemia), 160 treatment-emergent adverse events, and no deaths. Injection site reactions occurred in all tumors and tumor types, even at the lowest dose. Six of the 22 patients experienced a treatment response, with four of the six achieving complete response.

Interpretation

Intratumoral tigilanol tiglate was generally well tolerated, the maximum tolerated dose was not reached, and clinical activity was observed in 9 tumor types including complete response in four patients. These results support the continued development of tigilanol tiglate for intratumoral administration.

Funding

QBiotics Group Limited, Brisbane, Queensland, Australia, was the sponsor of the study.

Keywords

Diterpene ester; EBC-46; Intratumoral; Protein kinase C; Tigilanol tiglate

Research in Context

Evidence before this study

Intratumoral injection therapy for cancer currently remains a topic of intense interest, even though current clinical research is heavily focused on immunotherapy. As of January 2019, the PubMed database lists 3671 references under the search term “intratumoral injection”. Injection of antineoplastic agents directly into a tumor not only reduces systemic exposure, minimizes off-target toxicity, and limits the total amount of drug used but also induces robust antitumor activity in the injected lesion and, potentially, in noncontiguous non-injected lesions. Tigilanol tiglate possesses antitumor activity and appears to be effective and well tolerated when injected intralesionally as an alternative to surgery for canine mast cell tumors and soft tissue sarcomas in veterinary settings. Studies in syngeneic and xenograft mouse models showed that intratumoral injection of tigilanol tiglate into subcutaneous tumors resulted in PKC-dependent hemorrhagic necrosis within 24 h, complete loss of viable tumor cells, and marked vascular disruption at 24 h after treatment.

Added value of this study

Although surgery and radiotherapy constitute the great majority of local therapies for tumors, their application and effectiveness can be limited by many factors such as the overall status of the patient, proximity and/or infiltration of tumors into adjacent tissues, tumor inaccessibility, large tumour volume, intolerance of normal tissue to repeated courses of treatment, the presence of metastases and the availability of local facilities in developing nations. As a consequence, better local therapies are still needed for a wide range of tumors to reach the expanding network of infiltrating malignant cells that can be missed by surgery, to spare nearby normal tissue that would be damaged by radiation, and to control tumors that are otherwise untreatable. This study contributes to the body of knowledge supporting the utility of intratumoral injection as a component of local anticancer therapy and specifically to the role of PKC activation in solid tumor treatment.

Implications of all the available evidence

This first-in-human, dose-escalation, clinical study of the novel small molecule tigilanol tiglate administered intratumorally to patients with a range of cutaneous, subcutaneous, or nodal tumors showed that intratumoral administration of tigilanol tiglate is generally well tolerated, a maximum tolerated dose was not declared, and there were preliminary signs of efficacy. These results support the continued development of tigilanol tiglate for intratumoral administration. Future studies could include dosing levels based on target tumor volume ranges. To elucidate further the potential clinical usefulness of intratumoral therapy generally and tigilanol tiglate particularly, longer follow-up periods (for wound healing and efficacy assessments) and volumetric dosing assessments on treatment day (baseline) instead of at screening for RECIST response evaluations should be considered.

1. Introduction

Surgery and radiotherapy remain the conventional approaches to local treatment of malignant tumors, but these modalities can be limited by the location, accessibility, and size of the tumor and availability of medical facilities. Intratumoral (IT) administration of anti-neoplastic agents has been a treatment option for many years [1] and represents an alternative to surgery and radiotherapy in patients with localized accessible tumors, providing high drug concentrations at the tumor site with minimal exposure of non-target tissues [2]. A number of agents across different drug classes are being studied in the setting of IT administration [3][4][5].

Tigilanol tiglate (EBC-46) is a novel short-chain diterpene ester derived from the seeds of the native Australian blushwood tree (Fontainea picrosperma) [6] and is currently in development for the local treatment of a broad range of tumors [7][8][9].

Tigilanol tiglate induces a respiratory burst from human polymorphonuclear leukocytes [7] and when injected directly into a tumor, increases vascular endothelial permeability, provokes mitochondrial swelling and plasma membrane destruction in tumor cells, inhibits the growth and induces cell death of a number of human tumor cell lines [9]. It also induces a transcriptional profile with the characteristics of a Th1 immune response, suggesting an immunomodulatory effect that may play a role in tumor regression [10].

Tigilanol tiglate is a potent activator of protein kinase C (PKC) [9], which comprises a family of enzymes that induce changes in signal transduction pathways modulating diverse cellular responses including cell replication [[11][12][13][14]]. Recent clinical data support a tumor suppressive effect for PKC [15,16], although earlier studies suggested an oncogenic role [17].

Preclinical studies employing murine xenograft models showed that a single IT injection of tigilanol tiglate produced vascular disruption, hemorrhagic necrosis, an acute highly localized inflammatory response, rapid tumor cell death and regression of solid tumors [7,9,10,18]. In addition, numerous veterinary clinical studies have demonstrated that tigilanol tiglate administered IT was effective against neoplasms such as cutaneous mast cell tumors and soft tissue sarcomas [8,18,19]. Animal toxicity studies established that tigilanol tiglate has an acceptable safety profile, producing significantly greater local responses (erythema, edema, eschar formation) following IT injection compared with injection into normal skin.

We report the first-in-human study of IT tigilanol tiglate in patients with solid tumors. The results underpin the next phase in the development of tigilanol tiglate for the intratumoral treatment of solid tumors.

2. Patients and methods

2.1. Patient population

This was a Phase I, open-label, multicenter (four sites in Australia), single-arm, non-randomized, dose-escalation study of IT tigilanol tiglate in patients with accessible cutaneous, subcutaneous or nodal tumors refractory to conventional therapy. Eligible patients were >18 years with an ECOG performance status of 0 to 2 [20], a life expectancy >12 weeks, and measurable disease. Patients were not enrolled if they received any treatment within 3 weeks (or 6 weeks for nitrosoureas or mitomycin C) of study treatment, had uncontrolled CNS metastases, or were at increased risk for bleeding, including patients on anticoagulation. Pregnant or nursing females and patients considered to be inappropriate candidates for the study also were excluded. The study was approved by the Institutional Review Boards and Independent Ethics Committees at each participating site, and written consent by patients was required.

2.2. Study design

The primary objective was to establish the safety, tolerability and maximum tolerated dose (MTD) of IT tigilanol tiglate. Secondary objectives were to evaluate the preliminary efficacy of tigilanol tiglate and determine its pharmacokinetics (PK). The exploratory objective was to characterize the pharmacodynamics of tigilanol tiglate through analysis of post-administration blood and tumor tissue. Tigilanol tiglate was dissolved in 100% propylene glycol and mixed 4:6 with 30 mM sodium acetate buffer (pH 4·2) to provide stability and solubility and was provided in 2 mL vials containing 1·5 mg/mL or 2 mg/mL. Vials of diluent (40% propylene glycol in 30 mM acetate buffer) were supplied to allow preparation of the appropriate concentration.
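The protocol text implies standard dilution arithmetic (C1·V1 = C2·V2) for preparing the lower concentrations from the supplied vials. A minimal sketch of that calculation, with function and variable names of my own choosing:

    def diluent_volume_ml(stock_mg_per_ml, target_mg_per_ml, stock_volume_ml):
        """Diluent volume so that C_stock*V_stock = C_target*(V_stock + V_diluent)."""
        assert target_mg_per_ml <= stock_mg_per_ml
        return stock_volume_ml * (stock_mg_per_ml / target_mg_per_ml - 1.0)

    # e.g., bringing a 2 mL vial of the 1.5 mg/mL formulation down to the
    # 0.25 mg/mL used in Cohort 1 requires 2 * (1.5/0.25 - 1) = 10 mL of diluent.
    print(diluent_volume_ml(1.5, 0.25, 2.0))  # 10.0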

Patients received tigilanol tiglate via direct bolus injection(s) into no more than 3 selected superficial tumors on Day 1. The total administered volume of the solution was determined by body surface area (BSA) using the formula Volume = (BSA x Dose Level)/Concentration of Drug, where Volume is in mL, BSA is in m2, Dose Level is in mg/m2, and Concentration of Drug is in mg/mL [21]. The solution was injected into a volume of tumor estimated to be twice the volume of the injected solution (e.g., 1 mL tigilanol tiglate into 2 cm3 of tumor). Where tumors were larger than that required for the dose, a section of the tumor was injected. When multiple tumors were treated, the dose was divided in proportion to the target volume of each tumor. The dose was administered using a minimal number of injections in a fanning manner to spread the dose evenly throughout the tumor. After assessments over 24 h, patients were discharged from the study site on Day 2 and returned for follow-up on Days 3, 5, 8, 15, and 22 and, if wound healing or stabilization did not occur by Day 22, every 7 days thereafter until full healing or stabilization was achieved.
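The quoted volume formula is easy to sanity-check in code; a minimal sketch assuming nothing beyond the formula itself (names are mine, not the protocol's):

    def injection_volume_ml(bsa_m2, dose_mg_per_m2, concentration_mg_per_ml):
        """Volume (mL) = (BSA [m2] x dose level [mg/m2]) / concentration [mg/mL]."""
        return bsa_m2 * dose_mg_per_m2 / concentration_mg_per_ml

    # A patient with BSA 1.8 m2 at the 3.6 mg/m2 dose level, using the
    # 1.5 mg/mL formulation, would receive 4.32 mL in total, injected into
    # roughly twice that volume of tumor (~8.6 cm3).
    print(injection_volume_ml(1.8, 3.6, 1.5))  # 4.32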

The study was divided into Stages 1 and 2 (Fig. 1A and B). Stage 1 was conducted to establish the safety and tolerability of escalating doses and concentrations of tigilanol tiglate in single-patient cohorts until a severe treatment-emergent adverse event (TEAE) or dose-limiting toxicity (DLT) occurred, or the Safety Review Committee (SRC) determined the next stage should be commenced. TEAEs were defined as AEs that commenced at or after the start of tigilanol tiglate administration. Transition from Stage 1 to Stage 2 was to occur after Cohort 2. Stage 2 was conducted to determine the safety and tolerability of dose levels in cohorts of at least three patients using the conventional 3 + 3 design (minimum of three patients per dose cohort, with the potential to add an additional three patients to a cohort based on the incidence of DLTs) [22]. Following Cohort 3, the protocol was amended to revert enrollment to single-patient cohorts (Stage 1) to reduce the number of patients exposed to doses that were not likely to be therapeutically relevant. Escalation to the next dose was then planned to continue until the MTD was reached or the SRC or the sponsor determined that dose escalation should be terminated. A maximum tigilanol tiglate concentration of 1·5 mg/mL was used for Stage 2, with any subsequent dose escalation achieved by increases in tigilanol tiglate volume. The maximum dose used in this study was 3·60 mg/m2.

[Fig. 1]
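The conventional 3 + 3 rule referenced above [22] reduces to a short decision procedure. The sketch below is the generic textbook rule, not the study's actual escalation software:

    from typing import Optional

    def three_plus_three(dlts_first_3: int, dlts_added_3: Optional[int] = None) -> str:
        """Generic 3+3 escalation decision for one dose cohort, given the
        number of dose-limiting toxicities (DLTs) observed."""
        if dlts_first_3 == 0:
            return "escalate to the next dose level"
        if dlts_first_3 == 1:
            if dlts_added_3 is None:
                return "expand the cohort by 3 patients at the same dose"
            if dlts_added_3 == 0:
                return "escalate to the next dose level"
            return "stop: MTD exceeded; previous dose level is the MTD"
        return "stop: MTD exceeded; previous dose level is the MTD"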

2.3. Safety and efficacy evaluations

The safety population comprised all enrolled patients who received any dose of tigilanol tiglate, including those patients who did not complete the study. Routine clinical and laboratory assessments including hematology and biochemistry, and assessment of other safety variables including injection site reactions, wound healing, and AEs were performed on Days 1 and 2 and at each follow-up visit through Day 22 with the exceptions of hematology and biochemistry (Days 1, 2, 8, 15 and 22) and wound healing (from Day 8 onwards). The injection site reaction assessment covered pain, erythema and swelling of the skin or mucosa limited to the injection site and was graded as 1 if ≤ 3 cm adjacent to the tumor borders, grade 2 if > 3 cm and ≤ 6 cm, grade 3 if > 6 cm and grade 4 if it was life threatening, chronically disabling or a hemorrhage. The efficacy population was defined as all enrolled patients who received any dose of tigilanol tiglate and had at least one post-baseline tumor response assessment. To assess efficacy, target tumors were measured at baseline and on Day 22 using both calipers and computed tomographic (CT) scans, and RECIST 1·1 criteria [23] then were applied to assess anti-tumor responses. Some patients had efficacy assessments beyond Day 22.
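The injection site grading scale just described maps directly onto a small function; a sketch of those thresholds only (names are mine):

    def injection_site_grade(extent_cm: float, life_threatening: bool = False) -> int:
        """Grade pain/erythema/swelling by extent beyond the tumor borders (cm)."""
        if life_threatening:  # life threatening, chronically disabling, or hemorrhage
            return 4
        if extent_cm <= 3:
            return 1
        if extent_cm <= 6:
            return 2
        return 3

    assert injection_site_grade(2.5) == 1
    assert injection_site_grade(5.0) == 2
    assert injection_site_grade(7.0) == 3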

2.4. Pharmacokinetics

The PK population consisted of all enrolled patients who received any dose of tigilanol tiglate and had an evaluable plasma concentration profile. Blood for PK and biomarker analysis was collected within 30 min prior to dosing and then 5, 15, and 30 min and 1 h, 2 h, 4 h, 6 h, 8 h and 24 h after dosing. Plasma samples were assayed for tigilanol tiglate maximum observed concentration (Cmax), time of Cmax (Tmax), area under the plasma concentration-time curve from time zero to the last quantifiable sampling point post-dose (AUC0-t), area under the plasma concentration-time curve extrapolated to infinity (AUC0-∞), elimination half-life, and systemic clearance in accordance with CPR analytical laboratory method ALM-084. PK parameters were determined using Phoenix WinNonlin version 7·0 (Pharsight Corporation, USA).
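For orientation, the simplest of the noncompartmental quantities named here (Cmax, Tmax and AUC0-t by the linear trapezoidal rule) can be computed in a few lines. This sketch, with invented concentrations, stands in for, and is not, the validated Phoenix WinNonlin workflow:

    import numpy as np

    def nca_summary(t_h, conc):
        """Cmax, Tmax, and AUC0-t (linear trapezoidal rule) from one profile."""
        t_h, conc = np.asarray(t_h, float), np.asarray(conc, float)
        i = int(np.argmax(conc))
        return {"Cmax": conc[i], "Tmax_h": t_h[i],
                "AUC0-t": float(np.trapz(conc, t_h))}

    # Protocol sampling times (h): pre-dose, 5/15/30 min, then 1-24 h post-dose.
    t = [0, 5/60, 15/60, 0.5, 1, 2, 4, 6, 8, 24]
    c = [0, 9.8, 6.1, 3.9, 2.2, 0.9, 0.3, 0.15, 0.08, 0.0]  # illustrative values only
    print(nca_summary(t, c))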

2.5. Statistical analysis

The currently supported version of SAS Software (Version 9·4) was used to perform all data analyses.

Continuous variables were summarized using the statistical mean, median, standard deviation, minimum and maximum. Mean with standard deviation, and median with interquartile range, were presented to one more decimal place. Categorical variables were summarized with frequency counts and percentages. Percentages were rounded to one decimal place, with the relevant patient population being the denominator. Only basic descriptive statistics were performed.
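The summaries described here are ordinary descriptive statistics; a Python equivalent of that SAS output, shown with invented values for illustration:

    import numpy as np

    def describe(values):
        """n, mean (SD), median (IQR), min and max for a continuous variable."""
        v = np.asarray(values, dtype=float)
        q1, med, q3 = np.percentile(v, [25, 50, 75])
        return {"n": v.size, "mean": round(v.mean(), 1), "sd": round(v.std(ddof=1), 1),
                "median": med, "IQR": (q1, q3), "min": v.min(), "max": v.max()}

    print(describe([31, 45, 58, 64, 64, 70, 86]))  # e.g., ages in years (made up)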

The sample size for this study was selected without a power calculation; it was intended to provide descriptive information on safety, tolerability, and PK following administration of tigilanol tiglate while minimizing the number of patients exposed to potentially sub-therapeutic levels of the drug.

3. Results

3.1. Patient characteristics

Of the 22 patients enrolled in the study, 15 (68%) were male and seven (32%) were female; the median age was 64 years (range: 31 to 86 years); and 21 were Caucasian and one was Asian. At enrolment, seven patients had AJCC stage IV disease (32%), four had stage III (18%), five had stage II (23%) and two had stage I (9%); disease stage was not known and/or not recorded for four patients (18%). The first patient was consented and screened on 13 Feb 2015 and the last patient consented and screened on 30 May 2017. The last follow-up assessment (Patient 206, Day 64) was on 30 Jun 2017. A total of 29 tumors representing nine tumor types were treated in the 22 patients; squamous cell carcinoma was present in ten patients, followed by melanoma in three, basal cell carcinoma and breast adenocarcinoma in two each, and single cases of atypical fibroxanthoma, atypical myxoid fibrosarcoma, metastatic colorectal adenocarcinoma, adenoid cystic carcinoma and angiosarcoma. Seventeen patients had single tumors treated, three had two tumors treated, and two had three tumors treated. Thirteen patients (59%) had at least one antecedent oncologic surgical procedure and 17 (77%) had received chemotherapy or radiotherapy. Fifteen patients completed the study and seven were deemed to have discontinued (six after Day 22) because not all of their injection site wounds had healed or stabilized and the investigators felt the patients’ disease had progressed to the point where they should be taken off study.

3.2. Dose escalation

Single-patient cohorts 1 and 2 of Stage 1 received 0·06 and 0·12 mg/m2 of tigilanol tiglate, respectively, and no DLTs were observed (Table 1). Per protocol, transition to Stage 2 then proceeded and four patients were dosed in Cohort 3 at 0·24 mg/m2; the fourth patient was enrolled because leakage of study drug out of the tumor following IT injection occurred in one patient. Given the satisfactory tolerability of IT tigilanol tiglate in Cohort 3, the protocol was amended to allow resumption of Stage 1 (single-patient cohorts), and this was continued through three dose levels to 2·4 mg/m2. At this dose, a DLT of airway swelling was encountered, which led to transition to a second Stage 2 and expansion of the cohort to a total of four patients (Cohorts 6 and 7). Escalation then proceeded to the 3·6 mg/m2 dose level, which was completed without DLT. Although MTD was not reached in the study, dose escalation in the second Stage 2 was discontinued at the Cohort 8 dose level of 3·6 mg/m2, which was deemed by the sponsor to have provided an appropriate balance of safety and potential efficacy.

Table 1. Tigilanol tiglate dose level by cohort.

Stage | Cohort* | Cohort dose level (mg/m2) | Tigilanol tiglate concentration (mg/mL) | No. of patients
1 | 1 | 0·06 | 0·25 | 1
1 | 2 | 0·12 | 0·50 | 1
2 | 3† | 0·24 | 1·00 | 4
1 | 4 | 0·60 | 1·50 | 1
1 | 5 | 1·20 | 1·50 | 1
1 | 6 | 2·40 | 1·50 | 1
2 | 7 | 2·40 | 1·50 | 3
2 | 8 | 3·60 | 1·50 | 4
N/A | LEC‡ | Various | 1·50 | 6

N/A = not applicable.
* If no dose-limiting toxicity occurred, the subsequent cohort received the next highest dose.
† One patient experienced leakage of tigilanol tiglate, so an additional patient was included in the cohort.
‡ LEC (local effect cohort) included nominal doses of 0·6 mg/m2 (n = 1), 1·2 mg/m2 (n = 4), and 2·4 mg/m2 (n = 1).

3.3. Safety and tolerability

The vast majority of AEs (96%) were mild to moderate: 135 events were assessed as Grade 1, 81 as Grade 2, six as Grade 3 (four reports of injection site pain and single reports of abdominal pain and stridor), and two as Grade 4 (life-threatening upper airway obstruction and sepsis, respectively). There was one DLT and there were two serious AEs, 160 TEAEs, and no deaths. The DLT (upper airway obstruction) occurred in a patient who was treated with 2·4 mg/m2. The two serious AEs were sepsis and upper airway obstruction (also the above DLT). Sepsis due to Streptococcus pyogenes, which was considered possibly related to IT tigilanol tiglate, developed 6 weeks after IT injection and subsequent tumor ulceration in an elderly male with chronic venous insufficiency and an atypical fibroxanthoma on his leg, which eventually healed without evidence of residual tumor. Upper airway obstruction, which developed after subcutaneous infra-auricular/upper neck IT injection, was considered probably related to IT tigilanol tiglate because of altered anatomy including lymphatic drainage after surgery (neck dissection) and radiotherapy (undertaken prior to recruitment to this study) and subsequent parapharyngeal edema necessitating a precautionary tracheostomy. Abdominal pain was considered “not related” to IT tigilanol tiglate. The most common AE was injection site reaction in 12 patients, representing 46% of all treatment-related AEs and 33% of all TEAEs. The observed injection site reactions were Grade 1 for eight patients, Grade 2 for ten patients, and Grade 3 for four patients, and there was an observed dose-response relationship with respect to the frequency and intensity of injection site reactions. Injection site reactions are shown in Fig. 2. TEAEs by nominal dose and toxicity grade are listed in Table 2.

[Fig. 2]

Table 2. Number of TEAEs* by nominal dose level, toxicity grade and CTCAE.

[Table body not cleanly recoverable from the page extraction. Recoverable structure: one row per TEAE term, with event counts tabulated for each nominal dose level (0·06, 0·12, 0·24, 0·6, 1·2, 2·4 and 3·6 mg/m2, given to 1, 1, 4, 2, 5, 5 and 4 patients, respectively) and toxicity grades 2–4. TEAE terms reported: abdominal discomfort; agitation; anxiety; cellulitis; chills; cold sweat; conjunctivitis; increased C-reactive protein; dyspnea; erythema; eye irritation; eye pain; eye swelling; feeling hot; headache; hot flush; hypercalcemia; hyperglycemia; hypertension; hyperventilation; injection site edema; injection site pain; injection site reaction; injection site swelling; increased lacrimation; nasal discomfort; neck pain; neoplasm progression; obstructive airway disorder§; peripheral edema; decreased oxygen saturation; pain; pyrexia; increased respiratory rate; sepsis; skin abrasion; skin exfoliation; skin ulcer; stridor§; facial swelling; tachycardia; tremor; tumor ulceration; vascular disorders; wound secretion.]

CTCAE = Common Terminology Criteria for Adverse Events [24].
* There were 53 events in 12 of the 22 patients (54·5%) considered possibly related to tigilanol tiglate and 107 events in 12 of the 22 patients (54·5%) considered probably related to tigilanol tiglate (160 events in total).
§ Obstructive airway disorder and stridor represent the same event in one patient.

3.4. Efficacy

No clear dose-response relationship with respect to efficacy was observed with increasing dose in the dose-escalation cohorts. Best target tumor responses by RECIST 1·1 criteria using caliper measurements from Day 1 are shown in Table 3. Six of the 22 patients had a treated tumor whose injected area responded according to RECIST 1·1. One of 16 patients in the dose-escalation cohorts, treated with 2·4 mg/m2, experienced a complete response (disappearance of the injected target lesion), clinically confirmed after eventual healing of the injection site ulceration (patient 406); three of the 16 patients experienced partial response, one treated with 0·6 mg/m2, one with 0·24 mg/m2, and one with 2·4 mg/m2. Ten patients experienced stable disease, one experienced progressive disease, and response was not evaluable in one patient. Review of efficacy data from patients in the local effect cohort (LEC), who received an appropriate dose for tumor size (based on animal data), revealed three of six patients (50%) achieving complete response, three of six (50%) with stable disease, and none with progressive disease.

Table 3. Efficacy treatment response.

Cohort | Patient No. | Tumor (location) | Nominal/actual dose (mg/m2) | Estimated tumor volume (cm3) | Estimated % of tumor treated | Best RECIST (area of tumor injected) response by calipers from Day 1

Dose-escalation cohorts
1 | 401 | SCC (back) | 0·06/0·06 | 501·0 | 1% | SD
2 | 201 | SCC (shin) | 0·12/0·12 | 0·9 | 87% | SD
3 | 101 | BCC (nose) | 0·24/0·22 | 2·1 | 37% | SD
3 | 202 | SCC (lateral to eyes, behind ear) | 0·24/0·18 | 0·2; 0·3; 0·5 | 100%; 100%; 100% | PR
3 | 301 | Breast AC (chest) | 0·24/0·24 | 2·4 | 47% | PD
3 | 402 | SCC (zygoma) | 0·24/0·15 | 7·2 | 10% | SD
4 | 403 | SCC (medial canthus) | 0·6/0·59 | 1·4 | 100% | PR
5 | 203 | SCC (inner cheek/palate) | 1·2/1·19 | 12·0 | 21% | SD
6 | 405 | SCC (infra-auricular) | 2·4/2·32 | 58·2 | 10% | Not assessable
7 | 406 | Atypical fibroxanthoma (leg) | 2·4/2·38 | 13·3 | 48% | CR
7 | 408 | SCC (scalp) | 2·4/2·39 | 29·5 | 21% | SD
7 | 409 | SCC (back) | 2·4/2·42 | 7·5 | 21% | PR
8 | 103 | Metastatic melanoma (leg) | 3·6/1·76 | 16·1 | 35% | SD
8 | 206 | Metastatic colorectal AC (abdomen) | 3·6/3·63 | 9·0 | 25% | SD*
8 | 303 | Myxoid fibrosarcoma (cheek) | 3·6/3·61 | 4·7; 10·4 | 100%; 60% | SD
8 | 410 | SCC (temporal) | 3·6/3·47 | 31·9 | 26% | SD

Local effect cohort
LEC | 404† | Melanoma (axilla) | 0·6/0·47 | 0·5; 0·5 | 100%; 89% | CR followed by PD
LEC | 102† | Metastatic melanoma (arm) | 1·2/0·42 | 0·7; 0·4; 0·2 | 100%; 76%; 100% | CR
LEC | 407 | Angiosarcoma (nose) | 1·2/0·61 | 2·1 | 100% | CR
LEC | 204 | ACC (hard palate) | 1·2/1·2 | 4·4 | 77% | SD
LEC | 302 | Breast AC (breast) | 1·2/0·23 | 0·2; 0·3 | 100%; 100% | SD
LEC | 205 | BCC (nose) | 2·4/0·84 | 3·1 | 100% | SD

(For patients with more than one treated tumor, per-tumor volumes and treated percentages are listed in order, separated by semicolons.)

SCC = squamous cell carcinoma; AC = adenocarcinoma; ACC = adenoid cystic carcinoma; BCC = basal cell carcinoma; CR = complete response; PD = progressive disease; PR = partial response; SD = stable disease.
* Patient 206 had no caliper measurements for RECIST, so CT scan assessment is provided.
† Anenestic responses: 1) distant effect for patient 404 on fine needle aspiration-proven contralateral parotid nodal deposit and clinically suspicious leg melanoma; the patient was clinically and ultrasound clear at 33 months post-treatment, followed by systemic metastases although axillary and parotid nodes remained clear. 2) Local effect for patient 102 on the most distal untreated arm lesion. See Fig. 3.
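For readers decoding the response codes in Table 3, the RECIST 1·1 size rules [23] classify target-lesion responses roughly as follows. This sketch covers the published size criteria only; lesion selection and non-target/new-lesion rules are omitted:

    def recist_target_response(baseline_mm, current_mm, nadir_mm):
        """Classify summed target-lesion diameters (mm) under RECIST 1.1 size rules."""
        if current_mm == 0:
            return "CR"  # complete response: target lesions disappear
        if current_mm <= 0.7 * baseline_mm:
            return "PR"  # partial response: >= 30% decrease from baseline
        if current_mm >= 1.2 * nadir_mm and current_mm - nadir_mm >= 5:
            return "PD"  # progression: >= 20% and >= 5 mm increase from nadir
        return "SD"      # stable disease otherwise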

Two patients in the LEC experienced anenestic tumor responses. One of these patients (Patient 404), who had metastatic melanoma in axillary nodal disease, fine needle aspiration-proven contralateral parotid nodal deposit, and a clinically suspicious leg deposit, remained clinically and ultrasonographically clear for 33 months post-treatment but subsequently developed widespread metastatic disease although axillary and parotid nodes remained clear. The other patient (Patient 102) had melanoma with dermal, nodal and pleural metastases. This patient had an earlier local response to radiation therapy for chest wall metastases but developed progressive disease despite receiving four doses of pembrolizumab. Patient 102 experienced a complete response in the three injected cutaneous melanoma metastases on the right upper extremity. Significantly, a fourth cutaneous tumor, which was not injected with study drug (seen inferior to the injected lesions), experienced an anenestic response and completely resolved macroscopically during follow-up. Approximately 4 weeks after injection of the upper extremity lesions, a superficial sternal lesion (biopsy-proven metastatic melanoma) was injected, which also showed a complete response. Of note, CT scans showed anenestic responses in non-injected lymph node and pleural lesions, with complete resolution of an involved 24-mm left axillary node and a 29-mm right pleural nodule, and a reduction in size of an involved right inguinal node. The patient remained well and off treatment until a CT scan performed 14 months after the second tigilanol tiglate injection revealed progressive tumor involving bone and lymph nodes. Patient 407, who had biopsy-proven angiosarcoma of the nasal bridge, had been recommended for total rhinectomy. He achieved a complete response from a single injection of tigilanol tiglate. Three punch biopsies at 12 weeks revealed no residual tumor, and the patient remains disease free on CT scan at 25 months and clinically at 30·5 months post-injection. Fig. 3 shows the changes in tumor appearance with treatment for these four patients.

[Fig. 3]

The median volume of tumors treated in the dose-escalation cohorts was 37% of the total target tumor volume (range: 1% to 100%; average: 50%) compared with a median of 100% (range: 76% to 100%; average: 94%) in the LEC. The average size of tumors in the dose-escalation cohorts was very large (~40 cm3, maximum ~500 cm3) compared with tumors in the LEC (~1·2 cm3, maximum ~4·4 cm3). Leakage from ulcerating tumors occurred following ten injections, with a mean estimated loss of 10% to 20% of the administered volume. Median wound healing time of the treatment site was approximately 30 days post-injection.

3.5. Pharmacokinetics

Individual PK profiles for dose-escalation cohorts and the LEC are shown in Fig. 4. Review of these plots indicates that, within a few minutes of injection, tigilanol tiglate was detected in the plasma (median Tmax = 5 min; range: 0·07 to 2·0 h). Plasma concentration then declined rapidly to low levels within 2 to 4 h post-injection, suggesting that larger tumors could be treated with staged injections. There was no apparent trend for increased half-life with increased dose, with an overall median of 3·64 h (range: 1·55 to 9·42 h). Review of data across the cohorts demonstrated a dose-proportional increase in systemic exposure of tigilanol tiglate, as measured by Cmax, AUC0-t, and AUC0-∞ on Day 1 for doses of 0·06 to 3·6 mg/m2. These PK parameters exhibited an approximately linear relationship with dose across the dosing range using a power model approach.

[Fig. 4]
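A power-model assessment of dose proportionality fits log(AUC) = log(a) + b·log(dose) and checks that the exponent b is close to 1. A minimal sketch with invented numbers (not study data):

    import numpy as np

    # Invented data: nominal dose (mg/m2) vs AUC0-inf (h*ng/mL)
    dose = np.array([0.06, 0.12, 0.24, 0.6, 1.2, 2.4, 3.6])
    auc = np.array([0.5, 1.1, 2.0, 5.2, 10.5, 20.4, 31.0])

    b, log_a = np.polyfit(np.log(dose), np.log(auc), 1)
    print(f"exponent b = {b:.2f}")  # b close to 1 implies dose-proportional exposure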

4. Discussion

This was the first-in-human study to assess the safety and tolerability, efficacy and PK of tigilanol tiglate administered via IT injection to subcutaneous or nodal metastatic lesions in patients with advanced primary or metastatic tumors, using a dose-escalation design. As a first-in-human study, dose levels for the escalation cohorts were necessarily determined by BSA rather than by target tumor volume. As tigilanol tiglate is an IT treatment and efficacious dosing is based on tumor volume and not BSA, efficacy results and factors such as dosing trend have the potential to be confounded.

Although MTD was not reached in the study, the dose escalation in Stage 2 ceased at the Cohort 8 dose level of 3·60 mg/m2, which was deemed to have provided an appropriate balance between safety and potential efficacy. Most TEAEs were related to the local effects and mechanism of action of tigilanol tiglate. TEAEs generally were managed with symptomatic therapy such as analgesics. Some systemic exposure to tigilanol tiglate occurred and, within the range of doses used, the AEs related to systemic exposure appeared to be mild to moderate, self-limited and of short duration. Future studies will consider multiple administrations of lower doses.

This study used the RECIST 1·1 guidelines [23] for tumor assessment (by calipers and CT scan), which require tumors to be at least 10 mm in one dimension, limiting the ability to include smaller tumors. The LEC allowed patients with insufficient tumor volumes to qualify for dose escalation to be treated with a tigilanol tiglate dose that had been tolerated in a dose-escalation cohort. The median volume of target tumors treated in the dose-escalation cohorts was 37% (range: 1% to 100%, average 50%) of the total target tumor volume. In contrast, the median volume of target tumors treated in the LEC was 100% (range: 76% to 100%, average 94%). In addition, the range of tumor volumes in the dose-escalation cohorts was very large (up to ~500 cm3), whereas the LEC had a much smaller range (up to 4·5 cm3). These factors help explain the better responses in the LEC, which may reflect more accurately the tumor responses anticipated when tigilanol tiglate is dosed according to tumor volume in later phase trials. Three patients in the LEC, one with three treated metastatic melanomas on the arm, one with axillary node recurrence in a previously dissected area, and one with angiosarcoma on the nose, experienced complete response; two experienced complete response within the study period and one eventually outside this period. Both melanoma patients demonstrated an anenestic effect. Possible mechanisms for the regression of a tumor outside the scope of localized treatment could include exposure to tigilanol tiglate by lymphatic drainage, a bystander inflammatory response, or systemic activation of the immune system. The responses in the LEC are especially encouraging, as signals of clinical efficacy were identified despite the many inter- and intra-patient variables, including a variety of tumors and tumor volumes, different anatomic sites and a predominantly late-stage setting.

With respect to PK outcomes, systemic exposure following IT dosing might be explained by the known high vascularity of tumors. However, the study did not measure IT concentrations, although animal data have demonstrated that tigilanol tiglate levels achieved within the tumor were significantly higher than those reached in plasma [7].

5. Conclusion

In this first-in-human Phase I study, IT administration of tigilanol tiglate was generally well tolerated, the MTD was not reached, and signals of clinical efficacy were identified across nine tumor types. Four patients (two with metastatic melanoma, one with atypical fibroxanthoma and one with angiosarcoma) achieved complete response in the treated lesions, with the two melanoma patients demonstrating an anenestic response. These results support the continued development of tigilanol tiglate for IT administration into a Phase II efficacy trial and provide evidence of the potential role of PKC activation in the treatment of solid tumors.

Data sharing

QBiotics Group will make available to qualified scientific and medical researchers, upon signing a data access agreement, de-identified data that underlie the results of the study reported in this Article including text, tables, figures and appendices. Email requests for the data should be made to qbioticspublications@qbiotics.com. Provision of data will be completed without external investigator support.

Declaration of Competing Interest

BP reports consulting fees received from QBiotics for the development of future trials with EBC-46. PDS reports personal fees from BioSceptre Australia Pty Ltd, outside the submitted work. All other authors declare no competing interests.

Acknowledgments

We thank the patients and families who participated in the tigilanol tiglate (EBC-46) trial. We would also like to thank all the study teams involved at the participating sites.

Funding source

QBiotics sponsored and funded the study and was involved in study design and study management support. QBiotics funded INC Research and EMR Associates (independent project management and study monitoring) and Wolff Medical Communications (manuscript writing and editorial support). QBiotics had no input into data acquisition, data analysis, data interpretation or manuscript preparation, but assisted with minor adjustments to the final draft. All authors had access to the raw data and had final responsibility for the submitted paper.

References

[1] J.H. Muller, E. Held. Direct injection under operation of malignant tumors and areas of neoplastic infiltration with colloidal radioactive gold, Au198. Gynaecologia, 131 (1951), p. 389–94.
[2] E.P. Goldberg, A.R. Hadba, B.A. Almond, J.S. Marotta. Intratumoral cancer chemotherapy and immunotherapy: opportunities for nonsystemic preoperative drug delivery. J Pharm Pharmacol, 54 (2002), p. 159–80.
[3] P. Ellmark, S.M. Mangsbo, C. Furebring, C. Norlén, T.H. Tötterman. Tumor-directed immunotherapy can generate tumor-specific T-cell responses through localized co-stimulation. Cancer Immunol Immunother, 66 (2017), p. 1–7.
[4] S.S. Agarwala. Intralesional therapy for advanced melanoma: promise and limitation. Curr Opin Oncol, 27 (2015), p. 151–56.
[5] D. Nelson, S. Fisher, B. Robinson. The “Trojan Horse” approach to tumor immunotherapy: targeting the tumor microenvironment. J Immunol Res, 2014 (2014), Article 789069.
[6] E.L. Grant, H.M. Wallace, S.J. Trueman, P.W. Reddell, S.M. Ogbourne. Floral and reproductive biology of the medicinally significant rainforest tree, Fontainea picrosperma (Euphorbiaceae). Ind Crop Prod, 108 (2018), p. 416–22.
[7] C.M.E. Barnett, N. Broit, P.-Y. Yap, et al. Optimizing intratumoral treatment of squamous head and neck tumoral models with the diterpene ester tigilanol tiglate. Invest New Drugs (2018), Epub ahead of print.
[8] J. Campbell, C. Poulos, S. Lowden. Using triamcinolone in combination with the investigational anticancer agent EBC-46 (tigilanol tiglate) in the local treatment of a canine subcutaneous mast cell tumour. CVE Control Therapy Ser, 286 (2014), p. 11–7.
[9] G.M. Boyle, M.M.A. D’Souza, C.J. Pierce, et al. Intra-lesional injection of the novel PKC activator EBC-46 rapidly ablates tumors in mouse models. PLoS ONE, 9 (2014), Article e108887.
[10] J. Cullen, G. Boyle, M. D’Souza, et al. Investigating a naturally occurring small molecule, EBC-46, as an immunotherapeutic agent to help treat cancer. Eur J Cancer, 69 (Suppl 1) (2016), p. S153.
[11] M. Cooke, A. Magimaidas, V. Casado-Medrano, M.G. Kazanietz. Protein kinase C in cancer: the top five unanswered questions. Mol Carcinog, 56 (2017), p. 1531–42.
[12] M.L. Drummond, K.E. Prehoda. Molecular control of atypical protein kinase C: tipping the balance between self-renewal and differentiation. J Mol Biol, 428 (2016), p. 1455–64.
[13] E.O. Harrington, J. Löffler, P.R. Nelson, K.G. Kent, M. Simons, J.A. Ware. Enhancement of migration by protein kinase Cα and inhibition of proliferation and cell cycle progression by protein kinase Cδ in capillary endothelial cells. J Biol Chem, 272 (1997), p. 7390–7.
[14] A.C. Newton. Protein kinase C: structure, function and regulation. J Biol Chem, 270 (1995), p. 28495–8.
[15] A.C. Newton, J. Brognard. Reversing the paradigm: protein kinase C as a tumor suppressor. Trends Pharmacol Sci, 38 (2017), p. 438–47.
[16] C.M. Dowling, J. Phelan, J.A. Callender, et al. Protein kinase C beta II suppresses colorectal cancer by regulating IGF-1 mediated cell survival. Oncotarget, 7 (2016), p. 20919–33.
[17] M. Hidaki, H. Nakakuma, T. Kawaguchi, et al. Altered expression of protein kinase C in adult T-cell leukemia cells. Int J Hematol, 56 (1992), p. 135–41.
[18] S.R. Melo, E.V. Januário, E. Zanutto, S. Lowden, J.A. Matera. Time-assessed infra-red thermal characterization of canine cutaneous mast cell tumors (cMCT) treated intratumorally with the investigational anticancer agent tigilanol tiglate (EBC-46). Proceedings of the Veterinary Cancer Society Annual Conference, Portland, OR; October 26–28, 2017.
[19] J. Miller, J. Campbell, A. Blum, et al. Dose characterization of the investigational anticancer drug tigilanol tiglate (EBC-46) in the local treatment of canine mast cell tumours. Front Vet Sci, 6 (2019), pp. 1–10.
[20] M.M. Oken, R.H. Creech, D.C. Tormey, et al. Toxicity and response criteria of the Eastern Cooperative Oncology Group. Am J Clin Oncol, 5 (1982), p. 649–55.
[21] D. Du Bois, E.F. Du Bois. A formula to estimate the approximate surface area if height and weight be known. Arch Int Med, 17 (1916), p. 863–71.
[22] C. Le Tourneau, J.J. Lee, L.L. Siu. Dose escalation methods in phase I cancer clinical trials. J Natl Cancer Inst, 101 (2009), p. 708–20.
[23] E.A. Eisenhauer, P. Therasse, J. Bogaerts, et al. New response evaluation criteria in solid tumours: revised RECIST guideline (version 1.1). Eur J Cancer, 45 (2009), p. 228–47.
[24] National Institutes of Health. Common Terminology Criteria for Adverse Events (CTCAE) v4.03; June 14, 2010.

© 2019 The Authors. Published by Elsevier B.V.


Why human health must be at the center of climate action

By Charlotte Erbsøll and Vincent Gauthier | Wednesday, December 11, 2019

The United Nations General Assembly week in New York in September was a global stock-taking exercise aimed at understanding where the world collectively stands on progress toward the Sustainable Development Goals (SDGs), with 10 years remaining to achieve the 2030 agenda.

That week of stock-taking identified that although we have made progress in certain areas — such as infant and maternal mortality, poverty and infectious diseases — we are falling dangerously behind in efforts to reach the Global Goals. The natural environment is rapidly deteriorating because of climate change and collapsing ecosystems, global hunger is on the rise and at least half of the world’s population lacks access to essential healthcare services.

Two of the greatest challenges facing the 2030 agenda, climate change and public health, were strongly displayed in September. The U.N. Secretary General’s Climate Summit brought together world leaders to ramp up ambition for climate mitigation. By the end of the summit, 65 countries had committed to net-zero greenhouse gas emissions by 2050 and 87 companies had joined the “Business Ambition for 1.5˚C – Our Only Future” campaign. (As of Dec. 11, 177 companies had signed the pledge.) Alongside the Climate Summit, the U.N. hosted the High Level Political Forum on Universal Health Coverage, where countries signed the political declaration “Universal Health Coverage: moving together to build a healthier world” (PDF).


Although the high-level meetings on climate change and universal health coverage were held as separate negotiations in September, growing evidence suggests that the systemic failures driving these grave challenges are interconnected.

A recent Lancet report explains that obesity, undernutrition and climate change make up a syndemic (synergies of epidemics) “because they co-occur in time and place, interact with each other to produce complex sequelae, and share common underlying societal drivers.” Another Lancet commission publication, “Food in the Anthropocene: the EAT-Lancet Commission on healthy diets from sustainable food systems,” demonstrates that existing policies, incentives and subsidies in the food system cause unhealthy diets and unsustainable agricultural practices simultaneously. These reports show that the common systemic drivers pushing our global institutions toward results that hinder the 2030 agenda demand a holistic, multidisciplinary approach if solutions are to last.

That is why the U.N. Global Compact’s “Health is Everyone’s Business” action platform in September published the “Business Leadership Brief for Healthy Planet, Healthy People.” Launched at a side event to the U.N. General Assembly, the report calls on businesses to take an integrated approach to simultaneously improve the health of people and the planet. The report highlights that many challenges facing the planet and the health of people are interlinked: air pollution and climate change; water, sanitation and hygiene; and food and nutrition (see below).

The private sector has a substantial role to play in addressing the joint challenges facing the health of people and the planet. Companies can exacerbate these challenges by, among other things, releasing greenhouse gas emissions, having suppliers in areas without access to proper sanitation and hygiene, and having employees with unhealthy diets that hamper their productivity.

The private sector also can positively contribute to solving these challenges. “Especially through energy renovation of buildings, we can contribute simultaneously to addressing environmental and health concerns, to the benefit of residents and the planet,” said Mirella Vitale, senior vice president for marketing, communications and public affairs at ROCKWOOL Group.

The findings of the report highlight three key insights that can help companies create effective and lasting solutions that address the health of people and planet.

1. Understand and communicate the business case for action

Addressing environmental and climate determinants of health can provide strong business outcomes across many touchpoints in the value chain.

In the report, Steve Rochlin, CEO of Impact ROI, highlights mounting evidence that companies that take an integrated approach to climate and environment outperform their competitors across a range of vital key performance indicators (KPIs) including increased share price by as much as 6 percent and increased sales value by as much as 20 percent.

Mette Søs Lassesen, market director for the Environment & Health business at Ramboll, an engineering, design and consultancy company, reports that “most of our environment-related work focuses on human health outcomes, as well as environmental impacts — the two are inextricably related. We have found that business strategies that include health and well-being as a component of a broader sustainability focus improve competitive advantage and increase market opportunity.”

Minimizing health risks associated with air pollution, climate risks, poor water quality, sanitation and hygiene, and poor diets can reduce absenteeism and presenteeism, cut healthcare costs, increase productivity, and improve employee retention. Taking air pollution as an example, industry studies have found that poor air quality reduces consumption, hinders executive recruitment and imposes substantial healthcare costs on companies.

2. Taking a systems approach is necessary to develop integrated and long-lasting solutions

Sally Uren, CEO of Forum for the Future, wrote in the business leadership brief: “We need to acknowledge the deeply interconnected nature of challenges we are facing, and accept that addressing them will require fundamental changes in the way we think and operate.”

In her recent article on GreenBiz, Uren outlined six steps to build a sustainability strategy based on a systems approach. Consider those steps in the context of taking action on the interconnected challenges facing the health of people and planet:

  • Understand the world as a set of interconnected issues. It is necessary to see the interconnections between challenges such as air pollution and respiratory illness and find the transformative solutions that have the greatest co-benefits across the system. For example, transforming the food system can have co-benefits for the health of the planet and the health of people. Jessica Appelgren, vice president of communications at Impossible Foods, explained: “Compared to using cows to produce beef, plant-based meat uses less water, less land, and it is fundamentally a lot less expensive to produce, which means that at scale, we should be able to produce meats that are not only delicious but more affordable. This will have a huge impact on global food security, while also sparing the earth’s surface for biodiversity and wildlife.”
  • Identify where you can make the biggest possible impact on the system, ideally in a way that drives value back to the business, either directly or indirectly. For a strategy tackling challenges facing the health of the planet and people to be sustainable over time, it must include benefits to the business, such as reduced healthcare costs or the other business-case benefits presented above.
  • Design clear theories of change. Action towards challenges such as air pollution must be supported by clear change models that highlight the inputs, activities, outputs, outcomes and impact. Using such models allows for continued evaluation of assumptions about the linkages between actions and outcomes.
  • Design for transformational, not incremental change. The challenges of environmental degradation and human health require immediate transformative action. Strategies must be directed towards the conditions that constrict the functioning of the system. For example, the food system must remove wasteful agricultural subsidies that promote unsustainable practices and make unhealthy foods cheaper than fruits, vegetables and whole grains.
  • Be clear about what you can do alone and when you need to collaborate. Many challenges at the nexus of human and environmental health affect public goods such as urban air quality, and therefore cannot be solved individually. Forming collaborations with the right incentives for each stakeholder to push the ambition of the group can generate great success in tackling these challenges. Anna Brodowsky, vice president of public affairs at Essity, a hygiene and health company, emphasized the need for collaboration: “Poor hygiene and sanitation constitute barriers for the health, livelihood and well-being of millions of people and we believe that in order to generate global solutions we must work together to promote health and continue our innovations addressing both people and environmental needs.”
  • Check your assumptions about how change happens. Solutions to complex issues rarely succeed without multi-pronged approaches that shift mindsets and drive change.

3. Strategically integrate health into your environment and climate strategies

The last key insight outlined in our report is the need for strategic integration of health into environmental strategies across the value chain. Companies demonstrating leadership on planetary health challenges exhibit competencies in working collaboratively across disciplines and functional silos and across organizational boundaries to serve people and the planet. Two attributes are essential to successful business leadership on planetary health:

The first is the mastery of intent — the ability to intentionally design and implement solutions, such as programs, policies and products, which tackle global problems at the intersection of public health and the environment, thereby achieving more than the sum of their parts.

The second is the mastery of integration — the ability to design a corporate strategy that aligns teams, policies and targets around these integrated solutions.

Ambitious action to solve challenges facing the health of people and planet requires that companies design solutions at the intersection of public health and the environment, built within a corporate strategy that aligns the proper teams, policies and targets. The Health and Environment Strategy integration matrix below shows that companies must reach quadrant D by embedding the strategic value of health across the value chain.

Health as a leading indicator for environmental progress

In order to achieve the Sustainable Development Goals by 2030, we must put aside incremental change and target transformative opportunities that realign the ways in which systems operate.

When it comes to achieving a healthy planet for healthy people, we believe human health must become a leading indicator for environmental progress. With 23 percent of deaths globally (12.6 million) attributed to environmental risks and $5.11 trillion in welfare losses every year caused by air pollution, transformative change will take place only if companies begin to measure the health and welfare losses associated with their environmental impact.


By using health as a leading indicator of progress for environmental and climate action, companies will find a compelling business case for action by uncovering cost savings and risk reductions that otherwise would go unseen. Finally, tying progress with the human and emotional case of human health improvement can elevate the attractiveness of solutions to business leaders, employees and consumers.

Leading businesses understand the urgency of taking ambitious action on planetary health, and more need to follow suit. Pam Cheng, executive vice president of operations and IT at AstraZeneca, stressed the urgency to act: “Our collective response to climate change over the next 10 years will define health and wellness globally for generations to come. We do not have the next 50 years to make a difference. The time is now.”


Charlotte Erbsøll, Senior Advisor, Health, United Nations Global Compact
Vincent Gauthier, Project Manager, United Nations Global Compact



Dec 10, 2019

After Transplant Procedure, Man’s Semen Contains Only Donor’s DNA

Posted by Paul Battista in categories: biotech/medical, futurism


Chris Long is an IT worker in the Washoe County Sheriff’s Department in Reno, Nevada. But all the DNA in his semen belongs to a German man he’s never met.

That’s because Long received a bone marrow transplant from the European stranger four years ago — and the unexpected impact it has had on his biology could affect the future of forensic science.

According to a newly published New York Times story, the purpose of the transplant was to treat Long’s acute myeloid leukemia, a type of cancer that prevents the body from producing blood normally.


 

Computer

From Wikipedia, the free encyclopedia
 
 
Computers and computing devices from different eras

A computer is a machine that can be instructed to carry out sequences of arithmetic or logical operations automatically via computer programming. Modern computers have the ability to follow generalized sets of operations, called programs. These programs enable computers to perform an extremely wide range of tasks. A “complete” computer including the hardware, the operating system (main software), and peripheral equipment required and used for “full” operation can be referred to as a computer system. This term may as well be used for a group of computers that are connected and work together, in particular a computer network or computer cluster.

Computers are used as control systems for a wide variety of industrial and consumer devices. This includes simple special purpose devices like microwave ovens and remote controls, factory devices such as industrial robots and computer-aided design, and also general purpose devices like personal computers and mobile devices such as smartphones. The Internet is run on computers and it connects hundreds of millions of other computers and their users.

Early computers were only conceived as calculating devices. Since ancient times, simple manual devices like the abacus aided people in doing calculations. Early in the Industrial Revolution, some mechanical devices were built to automate long tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early 20th century. The first digital electronic calculating machines were developed during World War II. The first semiconductor transistors in the late 1940s were followed by the silicon-based MOSFET (MOS transistor) and monolithic integrated circuit (IC) chip technologies in the late 1950s, leading to the microprocessor and the microcomputer revolution in the 1970s. The speed, power and versatility of computers have been increasing dramatically ever since then, with MOS transistor counts increasing at a rapid pace (as predicted by Moore’s law), leading to the Digital Revolution during the late 20th to early 21st centuries.

Conventionally, a modern computer consists of at least one processing element, typically a central processing unit (CPU) in the form of a metal-oxide-semiconductor (MOS) microprocessor, along with some type of computer memory, typically MOS semiconductor memory chips. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices (keyboards, mice, joysticks, etc.), output devices (monitor screens, printers, etc.), and input/output devices that perform both functions (e.g., the 2000s-era touchscreen). Peripheral devices allow information to be retrieved from an external source and they enable the result of operations to be saved and retrieved.

Etymology

A human computer, with microscope and calculator, 1952

According to the Oxford English Dictionary, the first known use of the word “computer” was in 1613 in a book called The Yong Mans Gleanings by English writer Richard Braithwait: “I haue [sic] read the truest computer of Times, and the best Arithmetician that euer [sic] breathed, and he reduceth thy dayes into a short number.” This usage of the term referred to a human computer, a person who carried out calculations or computations. The word continued with the same meaning until the middle of the 20th century. During the latter part of this period women were often hired as computers because they could be paid less than their male counterparts.[1] By 1943, most human computers were women.[2]

The Online Etymology Dictionary gives the first attested use of “computer” in the 1640s, meaning “one who calculates”; this is an “agent noun from compute (v.)”. The Online Etymology Dictionary states that the use of the term to mean “‘calculating machine’ (of any type) is from 1897.” The Online Etymology Dictionary indicates that the “modern use” of the term, to mean “programmable digital electronic computer” dates from “1945 under this name; [in a] theoretical [sense] from 1937, as Turing machine“.[3]

History

Pre-20th century

 
The Ishango bone, a bone tool dating back to prehistoric Africa.

Devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with fingers. The earliest counting device was probably a form of tally stick. Later record keeping aids throughout the Fertile Crescent included calculi (clay spheres, cones, etc.) which represented counts of items, probably livestock or grains, sealed in hollow unbaked clay containers.[4][5] The use of counting rods is one example.

 
The Chinese suanpan (算盘). The number represented on this abacus is 6,302,715,408.

The abacus was initially used for arithmetic tasks. The Roman abacus was developed from devices used in Babylonia as early as 2400 BC. Since then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money.[6]

 
The Antikythera mechanism, dating back to ancient Greece circa 150–100 BC, is an early analog computing device.

The Antikythera mechanism is believed to be the earliest mechanical analog “computer”, according to Derek J. de Solla Price.[7] It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to c. 100 BC. Devices of a level of complexity comparable to that of the Antikythera mechanism would not reappear until a thousand years later.

Many mechanical aids to calculation and measurement were constructed for astronomical and navigation use. The planisphere was a star chart invented by Abū Rayhān al-Bīrūnī in the early 11th century.[8] The astrolabe was invented in the Hellenistic world in either the 1st or 2nd centuries BC and is often attributed to Hipparchus. A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy. An astrolabe incorporating a mechanical calendar computer[9][10] and gear-wheels was invented by Abi Bakr of Isfahan, Persia, in 1235.[11] Abū Rayhān al-Bīrūnī invented the first mechanical geared lunisolar calendar astrolabe,[12] an early fixed-wired knowledge processing machine[13] with a gear train and gear-wheels,[14] c. 1000 AD.

The sector, a calculating instrument used for solving problems in proportion, trigonometry, multiplication and division, and for various functions, such as squares and cube roots, was developed in the late 16th century and found application in gunnery, surveying and navigation.

The planimeter was a manual instrument to calculate the area of a closed figure by tracing over it with a mechanical linkage.

The slide rule was invented around 1620–1630, shortly after the publication of the concept of the logarithm. It is a hand-operated analog computer for doing multiplication and division. As slide rule development progressed, added scales provided reciprocals, squares and square roots, cubes and cube roots, as well as transcendental functions such as logarithms and exponentials, circular and hyperbolic trigonometry and other functions. Slide rules with special scales are still used for quick performance of routine calculations, such as the E6B circular slide rule used for time and distance calculations on light aircraft.
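
The principle a slide rule exploits is the logarithmic identity log a + log b = log(ab): adding two physical lengths proportional to logarithms performs a multiplication. A minimal Python sketch of that idea; the function name and example values are our own, purely for illustration:

```python
import math

def slide_rule_multiply(a: float, b: float) -> float:
    """Multiply two positive numbers by adding their logarithms,
    as a slide rule does with physical distances along its scales."""
    return math.exp(math.log(a) + math.log(b))

print(slide_rule_multiply(3.0, 7.0))  # ~21.0, up to floating-point error
```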

In the 1770s, Pierre Jaquet-Droz, a Swiss watchmaker, built a mechanical doll (automaton) that could write holding a quill pen. By switching the number and order of its internal wheels, different letters, and hence different messages, could be produced. In effect, it could be mechanically “programmed” to read instructions. Along with two other complex machines, the doll is at the Musée d’Art et d’Histoire of Neuchâtel, Switzerland, and still operates.[15]

The tide-predicting machine invented by Sir William Thomson in 1872 was of great utility to navigation in shallow waters. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location.

The differential analyser, a mechanical analog computer designed to solve differential equations by integration, used wheel-and-disc mechanisms to perform the integration. In 1876, Lord Kelvin had already discussed the possible construction of such calculators, but he had been stymied by the limited output torque of the ball-and-disk integrators.[16] In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output. The torque amplifier was the advance that allowed these machines to work. Starting in the 1920s, Vannevar Bush and others developed mechanical differential analyzers.

First computing device

 
A portion of Babbage’s Difference engine.

Charles Babbage, an English mechanical engineer and polymath, originated the concept of a programmable computer. Considered the “father of the computer”,[17] he conceptualized and invented the first mechanical computer in the early 19th century. After working on his revolutionary difference engine, designed to aid in navigational calculations, in 1833 he realized that a much more general design, an Analytical Engine, was possible. The input of programs and data was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter and a bell. The machine would also be able to punch numbers onto cards to be read in later. The Engine incorporated an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete.[18][19]

The machine was about a century ahead of its time. All the parts for his machine had to be made by hand – this was a major problem for a device with thousands of parts. Eventually, the project was dissolved with the decision of the British Government to cease funding. Babbage’s failure to complete the analytical engine can be chiefly attributed to political and financial difficulties as well as his desire to develop an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Nevertheless, his son, Henry Babbage, completed a simplified version of the analytical engine’s computing unit (the mill) in 1888. He gave a successful demonstration of its use in computing tables in 1906.

Analog computers

 
Sir William Thomson‘s third tide-predicting machine design, 1879–81

During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers.[20] The first modern analog computer was a tide-predicting machine, invented by Sir William Thomson in 1872. The differential analyser, a mechanical analog computer designed to solve differential equations by integration using wheel-and-disc mechanisms, was conceptualized in 1876 by James Thomson, the brother of the more famous Lord Kelvin.[16]

The art of mechanical analog computing reached its zenith with the differential analyzer, built by H. L. Hazen and Vannevar Bush at MIT starting in 1927. This built on the mechanical integrators of James Thomson and the torque amplifiers invented by H. W. Nieman. A dozen of these devices were built before their obsolescence became obvious. By the 1950s, the success of digital electronic computers had spelled the end for most analog computing machines, but analog computers remained in use during the 1950s in some specialized applications such as education (the slide rule) and aircraft (control systems).

Digital computers

Electromechanical

By 1938, the United States Navy had developed an electromechanical analog computer small enough to use aboard a submarine. This was the Torpedo Data Computer, which used trigonometry to solve the problem of firing a torpedo at a moving target. During World War II similar devices were developed in other countries as well.

 
Replica of Zuse‘s Z3, the first fully automatic, digital (electromechanical) computer.

Early digital computers were electromechanical; electric switches drove mechanical relays to perform the calculation. These devices had a low operating speed and were eventually superseded by much faster all-electric computers, originally using vacuum tubes. The Z2, created by German engineer Konrad Zuse in 1939, was one of the earliest examples of an electromechanical relay computer.[21]

In 1941, Zuse followed up his earlier machine with the Z3, the world’s first working electromechanical programmable, fully automatic digital computer.[22][23] The Z3 was built with 2000 relays, implementing a 22-bit word length that operated at a clock frequency of about 5–10 Hz.[24] Program code was supplied on punched film while data could be stored in 64 words of memory or supplied from the keyboard. It was quite similar to modern machines in some respects, pioneering numerous advances such as floating point numbers. Rather than the harder-to-implement decimal system (used in Charles Babbage’s earlier design), using a binary system meant that Zuse’s machines were easier to build and potentially more reliable, given the technologies available at that time.[25] The Z3 was Turing complete.[26][27]

Vacuum tubes and digital electronic circuits

Purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents, at the same time that digital calculation replaced analog. The engineer Tommy Flowers, working at the Post Office Research Station in London in the 1930s, began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built in 1934 went into operation five years later, converting a portion of the telephone exchange network into an electronic data processing system, using thousands of vacuum tubes.[20] In the US, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed and tested the Atanasoff–Berry Computer (ABC) in 1942,[28] the first “automatic electronic digital computer”.[29] This design was also all-electronic and used about 300 vacuum tubes, with capacitors fixed in a mechanically rotating drum for memory.[30]

Colossus, the first electronic digital programmable computing device, was used to break German ciphers during World War II.

During World War II, the British at Bletchley Park achieved a number of successes at breaking encrypted German military communications. The German encryption machine, Enigma, was first attacked with the help of the electro-mechanical bombes which were often run by women.[31][32] To crack the more sophisticated German Lorenz SZ 40/42 machine, used for high-level Army communications, Max Newman and his colleagues commissioned Flowers to build the Colossus.[30] He spent eleven months from early February 1943 designing and building the first Colossus.[33] After a functional test in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944[34] and attacked its first message on 5 February.[30]

Colossus was the world’s first electronic digital programmable computer.[20] It used a large number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to perform a variety of boolean logical operations on its data, but it was not Turing-complete. Nine Mk II Colossi were built (the Mk I was converted to a Mk II, making ten machines in total). Colossus Mark I contained 1,500 thermionic valves (tubes), but Mark II, with 2,400 valves, was both five times faster and simpler to operate than Mark I, greatly speeding the decoding process.[35][36]

 
ENIAC was the first electronic, Turing-complete device, and performed ballistics trajectory calculations for the United States Army.

The ENIAC[37] (Electronic Numerical Integrator and Computer) was the first electronic programmable computer built in the U.S. Although the ENIAC was similar to the Colossus, it was much faster, more flexible, and it was Turing-complete. Like the Colossus, a “program” on the ENIAC was defined by the states of its patch cables and switches, a far cry from the stored program electronic machines that came later. Once a program was written, it had to be mechanically set into the machine with manual resetting of plugs and switches. The programmers of the ENIAC were six women, often known collectively as the “ENIAC girls”.[38][39]

It combined the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5000 times a second, a thousand times faster than any other machine. It also had modules to multiply, divide, and square root. High speed memory was limited to 20 words (about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC’s development and construction lasted from 1943 to full operation at the end of 1945. The machine was huge, weighing 30 tons, using 200 kilowatts of electric power and contained over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors.[40]

Modern computers

Concept of modern computer

The principle of the modern computer was proposed by Alan Turing in his seminal 1936 paper,[41] On Computable Numbers. Turing proposed a simple device that he called “Universal Computing machine” and that is now known as a universal Turing machine. He proved that such a machine is capable of computing anything that is computable by executing instructions (program) stored on tape, allowing the machine to be programmable. The fundamental concept of Turing’s design is the stored program, where all the instructions for computing are stored in memory. Von Neumann acknowledged that the central concept of the modern computer was due to this paper.[42] Turing machines are to this day a central object of study in theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine.
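
As a rough illustration of the idea of executing instructions stored on a tape, here is a minimal Turing machine simulator in Python. The transition table shown (a toy machine that appends one '1' to a unary number) is our own invented example, not anything from Turing's paper:

```python
# Minimal Turing machine simulator: state, tape, head, transition table.
# transitions maps (state, symbol) -> (next_state, symbol_to_write, move).

def run_turing_machine(tape, transitions, state="start", accept="halt"):
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    while state != accept:
        symbol = tape.get(head, "_")  # '_' stands for a blank cell
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# Toy machine: scan right over the 1s, write one more 1 at the first blank.
transitions = {
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("halt",  "1", "R"),
}

print(run_turing_machine("111", transitions))  # -> '1111'
```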

Stored programs

A section of the Manchester Baby, the first electronic stored-program computer

Early computing machines had fixed programs. Changing a machine’s function required re-wiring and re-structuring the machine.[30] With the proposal of the stored-program computer this changed. A stored-program computer includes by design an instruction set and can store in memory a set of instructions (a program) that details the computation. The theoretical basis for the stored-program computer was laid by Alan Turing in his 1936 paper. In 1945, Turing joined the National Physical Laboratory and began work on developing an electronic stored-program digital computer. His 1945 report “Proposed Electronic Calculator” was the first specification for such a device. John von Neumann at the University of Pennsylvania also circulated his First Draft of a Report on the EDVAC in 1945.[20]

The Manchester Baby was the world’s first stored-program computer. It was built at the Victoria University of Manchester by Frederic C. Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948.[43] It was designed as a testbed for the Williams tube, the first random-access digital storage device.[44] Although the computer was considered “small and primitive” by the standards of its time, it was the first working machine to contain all of the elements essential to a modern electronic computer.[45] As soon as the Baby had demonstrated the feasibility of its design, a project was initiated at the university to develop it into a more usable computer, the Manchester Mark 1. Grace Hopper was the first person to develop a compiler for a programming language.[2]

The Mark 1 in turn quickly became the prototype for the Ferranti Mark 1, the world’s first commercially available general-purpose computer.[46] Built by Ferranti, it was delivered to the University of Manchester in February 1951. At least seven of these later machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam.[47] In October 1947, the directors of British catering company J. Lyons & Company decided to take an active role in promoting the commercial development of computers. The LEO I computer became operational in April 1951[48] and ran the world’s first regular routine office computer job.

Transistors

The concept of a field-effect transistor was proposed by Julius Edgar Lilienfeld in 1925. John Bardeen and Walter Brattain, while working under William Shockley at Bell Labs, built the first working transistor, the point-contact transistor, in 1947, which was followed by Shockley’s bipolar junction transistor in 1948.[49][50] From 1955 onwards, transistors replaced vacuum tubes in computer designs, giving rise to the “second generation” of computers. Compared to vacuum tubes, transistors have many advantages: they are smaller and require less power, so they give off less heat. Junction transistors were much more reliable than vacuum tubes and had a longer, effectively indefinite, service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialised applications.[51]

At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of valves.[52] Their first transistorised computer and the first in the world, was operational by 1953, and a second version was completed there in April 1955. However, the machine did make use of valves to generate its 125 kHz clock waveforms and in the circuitry to read and write on its magnetic drum memory, so it was not the first completely transistorized computer. That distinction goes to the Harwell CADET of 1955,[53] built by the electronics division of the Atomic Energy Research Establishment at Harwell.[53][54]

 
MOSFET (MOS transistor), showing gate (G), body (B), source (S) and drain (D) terminals. The gate is separated from the body by an insulating layer (pink).

The metal–oxide–silicon field-effect transistor (MOSFET), also known as the MOS transistor, was invented by Mohamed M. Atalla and Dawon Kahng at Bell Labs in 1959.[55] It was the first truly compact transistor that could be miniaturised and mass-produced for a wide range of uses.[51] With its high scalability,[56] and much lower power consumption and higher density than bipolar junction transistors,[57] the MOSFET made it possible to build high-density integrated circuits.[58][59] In addition to data processing, it also enabled the practical use of MOS transistors as memory cell storage elements, leading to the development of MOS semiconductor memory, which replaced earlier magnetic-core memory in computers.[60] The MOSFET led to the microcomputer revolution,[61] and became the driving force behind the computer revolution.[62][63] The MOSFET is the most widely used transistor in computers,[64][65] and is the fundamental building block of digital electronics.[66]

Integrated circuits

The next great advance in computing power came with the advent of the integrated circuit (IC). The idea of the integrated circuit was first conceived by a radar scientist working for the Royal Radar Establishment of the Ministry of Defence, Geoffrey W.A. Dummer. Dummer presented the first public description of an integrated circuit at the Symposium on Progress in Quality Electronic Components in Washington, D.C. on 7 May 1952.[67]

The first working ICs were invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor.[68] Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated example on 12 September 1958.[69] In his patent application of 6 February 1959, Kilby described his new device as “a body of semiconductor material … wherein all the components of the electronic circuit are completely integrated”.[70][71] However, Kilby’s invention was a hybrid integrated circuit (hybrid IC), rather than a monolithic integrated circuit (IC) chip.[72] Kilby’s IC had external wire connections, which made it difficult to mass-produce.[73]

Noyce also came up with his own idea of an integrated circuit half a year later than Kilby.[74] Noyce’s invention was the first true monolithic IC chip.[75][73] His chip solved many practical problems that Kilby’s had not. Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby’s chip was made of germanium. Noyce’s monolithic IC was fabricated using the planar process, developed by his colleague Jean Hoerni in early 1959. In turn, the planar process was based on the silicon surface passivation and thermal oxidation processes developed by Mohamed Atalla at Bell Labs in the late 1950s.[76][77][78]

Modern monolithic ICs are predominantly MOS (metal-oxide-semiconductor) integrated circuits, built from MOSFETs (MOS transistors).[79] After the first MOSFET was invented by Mohamed Atalla and Dawon Kahng at Bell Labs in 1959,[80] Atalla first proposed the concept of the MOS integrated circuit in 1960, followed by Kahng in 1961, both noting that the MOS transistor’s ease of fabrication made it useful for integrated circuits.[51][81] The earliest experimental MOS IC to be fabricated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962.[82] General Microelectronics later introduced the first commercial MOS IC in 1964,[83] developed by Robert Norman.[82] Following the development of the self-aligned gate (silicon-gate) MOS transistor by Robert Kerwin, Donald Klein and John Sarace at Bell Labs in 1967, the first silicon-gate MOS IC with self-aligned gates was developed by Federico Faggin at Fairchild Semiconductor in 1968.[84] The MOSFET has since become the most critical device component in modern ICs.[85]

The development of the MOS integrated circuit led to the invention of the microprocessor,[86][87] and heralded an explosion in the commercial and personal use of computers. While the subject of exactly which device was the first microprocessor is contentious, partly due to lack of agreement on the exact definition of the term “microprocessor”, it is largely undisputed that the first single-chip microprocessor was the Intel 4004,[88] designed and realized by Federico Faggin with his silicon-gate MOS IC technology,[86] along with Ted Hoff, Masatoshi Shima and Stanley Mazor at Intel.[89][90] In the early 1970s, MOS IC technology enabled the integration of more than 10,000 transistors on a single chip.[59]

Systems on a chip (SoCs) are complete computers on a microchip (or chip) the size of a coin.[91] They may or may not have integrated RAM and flash memory. If not integrated, the RAM is usually placed directly above (known as package on package) or below (on the opposite side of the circuit board) the SoC, and the flash memory is usually placed right next to the SoC, all to improve data transfer speeds, as the data signals do not have to travel long distances. Since ENIAC in 1945, computers have advanced enormously, with modern SoCs (such as the Snapdragon 865) being the size of a coin while also being hundreds of thousands of times more powerful than ENIAC, integrating billions of transistors and consuming only a few watts of power.

Mobile computers

The first mobile computers were heavy and ran from mains power. The 50 lb IBM 5100 was an early example. Later portables such as the Osborne 1 and Compaq Portable were considerably lighter but still needed to be plugged in. The first laptops, such as the Grid Compass, removed this requirement by incorporating batteries – and with the continued miniaturization of computing resources and advancements in portable battery life, portable computers grew in popularity in the 2000s.[92] The same developments allowed manufacturers to integrate computing resources into cellular mobile phones by the early 2000s.

These smartphones and tablets run on a variety of operating systems and recently became the dominant computing devices on the market.[93] These are powered by systems on a chip (SoCs), which are complete computers on a microchip the size of a coin.[91]

Types

Computers can be classified in a number of different ways, including:

  • By architecture
  • By size and form-factor

Hardware

Video demonstrating the standard components of a “slimline” computer

The term hardware covers all of those parts of a computer that are tangible physical objects. Circuits, computer chips, graphic cards, sound cards, memory (RAM), motherboard, displays, power supplies, cables, keyboards, printers and “mice” input devices are all hardware.

History of computing hardware

First generation (mechanical/electromechanical)
  • Calculators: Pascal’s calculator, Arithmometer, Difference engine, Quevedo’s analytical machines
  • Programmable devices: Jacquard loom, Analytical engine, IBM ASCC/Harvard Mark I, Harvard Mark II, IBM SSEC, Z1, Z2, Z3
Second generation (vacuum tubes)
  • Calculators: Atanasoff–Berry Computer, IBM 604, UNIVAC 60, UNIVAC 120
  • Programmable devices: Colossus, ENIAC, Manchester Baby, EDSAC, Manchester Mark 1, Ferranti Pegasus, Ferranti Mercury, CSIRAC, EDVAC, UNIVAC I, IBM 701, IBM 702, IBM 650, Z22
Third generation (discrete transistors and SSI, MSI, LSI integrated circuits)
  • Mainframes: IBM 7090, IBM 7080, IBM System/360, BUNCH
  • Minicomputers: HP 2116A, IBM System/32, IBM System/36, LINC, PDP-8, PDP-11
  • Desktop computers: HP 9100
Fourth generation (VLSI integrated circuits)
  • Minicomputers: VAX, IBM System i
  • 4-bit microcomputers: Intel 4004, Intel 4040
  • 8-bit microcomputers: Intel 8008, Intel 8080, Motorola 6800, Motorola 6809, MOS Technology 6502, Zilog Z80
  • 16-bit microcomputers: Intel 8088, Zilog Z8000, WDC 65816/65802
  • 32-bit microcomputers: Intel 80386, Pentium, Motorola 68000, ARM
  • 64-bit microcomputers:[94] Alpha, MIPS, PA-RISC, PowerPC, SPARC, x86-64, ARMv8-A
  • Embedded computers: Intel 8048, Intel 8051
  • Personal computers: Desktop computer, Home computer, Laptop computer, Personal digital assistant (PDA), Portable computer, Tablet PC, Wearable computer
Theoretical/experimental
  • Quantum computer, Chemical computer, DNA computing, Optical computer, Spintronics-based computer, Wetware/Organic computer

Other hardware topics

Peripheral devices (input/output)
  • Input: Mouse, keyboard, joystick, image scanner, webcam, graphics tablet, microphone
  • Output: Monitor, printer, loudspeaker
  • Both: Floppy disk drive, hard disk drive, optical disc drive, teleprinter
Computer buses
  • Short range: RS-232, SCSI, PCI, USB
  • Long range (computer networking): Ethernet, ATM, FDDI

A general purpose computer has four main components: the arithmetic logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires. Inside each of these parts are thousands to trillions of small electrical circuits which can be turned off or on by means of an electronic switch. Each circuit represents a bit (binary digit) of information so that when the circuit is on it represents a “1”, and when off it represents a “0” (in positive logic representation). The circuits are arranged in logic gates so that one or more of the circuits may control the state of one or more of the other circuits.
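
The claim that circuits arranged as logic gates let some circuits control the state of others can be sketched in a few lines of Python. Modelling each gate as a function is our own simplification (real gates are electronic switches, not function calls); the sketch builds the other gates out of NAND alone:

```python
# Every gate below is composed from NAND, illustrating how the output of
# one circuit can control the inputs of others.

def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def not_(a: int) -> int:         return nand(a, a)
def and_(a: int, b: int) -> int: return not_(nand(a, b))
def or_(a: int, b: int) -> int:  return nand(not_(a), not_(b))
def xor(a: int, b: int) -> int:  return and_(or_(a, b), nand(a, b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))  # truth table of XOR built from NAND
```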

Input devices

When unprocessed data is sent to the computer with the help of input devices, the data is processed and sent to output devices. The input devices may be hand-operated or automated. The act of processing is mainly regulated by the CPU. Some examples of input devices are the keyboard, mouse, joystick, image scanner, webcam, graphics tablet and microphone.

Output devices

The means through which a computer gives output are known as output devices. Some examples of output devices are the monitor, printer and loudspeaker.

Control unit

 
Diagram showing how a particular MIPS architecture instruction would be decoded by the control system

The control unit (often called a control system or central controller) manages the computer’s various components; it reads and interprets (decodes) the program instructions, transforming them into control signals that activate other parts of the computer.[95] Control systems in advanced computers may change the order of execution of some instructions to improve performance.

A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.[96]

The control system’s function is as follows. Note that this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU; a toy simulator of the cycle follows the list:

  1. Read the code for the next instruction from the cell indicated by the program counter.
  2. Decode the numerical code for the instruction into a set of commands or signals for each of the other systems.
  3. Increment the program counter so it points to the next instruction.
  4. Read whatever data the instruction requires from cells in memory (or perhaps from an input device). The location of this required data is typically stored within the instruction code.
  5. Provide the necessary data to an ALU or register.
  6. If the instruction requires an ALU or specialized hardware to complete, instruct the hardware to perform the requested operation.
  7. Write the result from the ALU back to a memory location or to a register or perhaps an output device.
  8. Jump back to step (1).
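
A toy Python simulator of this fetch-decode-execute cycle. The instruction names (LOAD, ADD, STORE, JUMP, HALT), the single accumulator register and the (op, arg) encoding are illustrative assumptions of ours, not any real CPU's instruction set:

```python
def run(program, memory):
    pc = 0   # program counter: which instruction to read next
    acc = 0  # accumulator register
    while True:
        op, arg = program[pc]        # step 1: fetch the next instruction
        pc += 1                      # step 3: point at the following one
        # step 2: "decode" is the dispatch below; steps 4-7 are the branches
        if op == "LOAD":             # read a memory cell into the register
            acc = memory[arg]
        elif op == "ADD":            # ask the ALU to add
            acc += memory[arg]
        elif op == "STORE":          # write the result back to memory
            memory[arg] = acc
        elif op == "JUMP":           # a jump simply overwrites the program counter
            pc = arg
        elif op == "HALT":
            return memory

memory = {0: 20, 1: 22, 2: 0}
program = [("LOAD", 0), ("ADD", 1), ("STORE", 2), ("HALT", None)]
print(run(program, memory)[2])  # -> 42
```

Because JUMP just overwrites the program counter, adding it to a program is all it takes to express the loops and conditional execution discussed below.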

Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as “jumps” and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow).

The sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program, and indeed, in some more complex CPU designs, there is another yet smaller computer called a microsequencer, which runs a microcode program that causes all of these events to happen.

Central processing unit (CPU)

The control unit, ALU, and registers are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components. Since the 1970s, CPUs have typically been constructed on a single MOS integrated circuit chip called a microprocessor.

Arithmetic logic unit (ALU)

The ALU is capable of performing two classes of operations: arithmetic and logic.[97] The set of arithmetic operations that a particular ALU supports may be limited to addition and subtraction, or might include multiplication, division, trigonometric functions such as sine, cosine, etc., and square roots. Some can only operate on whole numbers (integers) while others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation—although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other (“is 64 greater than 65?”). Logic operations involve Boolean logic: AND, OR, XOR, and NOT. These can be useful for creating complicated conditional statements and processing boolean logic.
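
Sketching the ALU as a pure function makes the two classes of operations concrete. The operation names below are our own illustrative choices, not a real ALU's control codes:

```python
# An ALU as a function: an operation selector plus two operands in, a result out.

def alu(op: str, a: int, b: int):
    if op == "ADD": return a + b
    if op == "SUB": return a - b
    if op == "AND": return a & b
    if op == "OR":  return a | b
    if op == "XOR": return a ^ b
    if op == "GT":  return a > b   # comparison yields a boolean truth value
    raise ValueError(f"unsupported operation: {op}")

print(alu("ADD", 2, 3))   # 5
print(alu("GT", 64, 65))  # False: "is 64 greater than 65?"
```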

Superscalar computers may contain multiple ALUs, allowing them to process several instructions simultaneously.[98] Graphics processors and computers with SIMD and MIMD features often contain ALUs that can perform arithmetic on vectors and matrices.

Memory

 
Magnetic-core memory (using magnetic cores) was the computer memory of choice in the 1960s, until it was replaced by semiconductor memory (using MOS memory cells).

A computer’s memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered “address” and can store a single number. The computer can be instructed to “put the number 123 into the cell numbered 1357” or to “add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595.” The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is the software’s responsibility to give significance to what the memory sees as nothing but a series of numbers.

In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers (2⁸ = 256); either from 0 to 255 or −128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two’s complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory if it can be represented numerically. Modern computers have billions or even trillions of bytes of memory.
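
A short worked example of these representations in Python; the helper function is our own, written only to show the arithmetic:

```python
# One byte holds 2**8 = 256 distinct values; the same bit pattern can be
# read as 0..255 (unsigned) or -128..+127 (two's complement signed).

def to_signed(byte: int) -> int:
    """Interpret an unsigned byte value (0..255) as two's complement."""
    return byte - 256 if byte >= 128 else byte

print(2 ** 8)                      # 256 values per byte
print(to_signed(0b11111111))       # -1: all ones is -1 in two's complement
print(to_signed(130))              # -126
print((1000).to_bytes(2, "big"))   # larger numbers span consecutive bytes
```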

The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. As data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer’s speed.

Computer main memory comes in two principal varieties: random-access memory (RAM) and read-only memory (ROM).

RAM can be read and written to anytime the CPU commands it, but ROM is preloaded with data and software that never changes, so the CPU can only read from it. ROM is typically used to store the computer’s initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer’s operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable. However, it is typically much slower than conventional ROM and RAM, so its use is restricted to applications where high speed is unnecessary.[99]

In more sophisticated computers there may be one or more RAM cache memories, which are slower than registers but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer’s part.

Input/output (I/O)

 
Hard disk drives are common storage devices used with computers.

I/O is the means by which a computer exchanges information with the outside world.[100] Devices that provide input or output to the computer are called peripherals.[101] On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer. Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O. I/O devices are often complex computers in their own right, with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics.[citation needed] Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O. A 2016-era flat screen display contains its own computer circuitry.

Multitasking

While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by multitasking, i.e. having the computer switch rapidly between running each program in turn.[102] One means by which this is done is with a special signal called an interrupt, which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running “at the same time”, then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time even though only one is ever executing in any given instant. This method of multitasking is sometimes termed “time-sharing” since each program is allocated a “slice” of time in turn.[103]

Before the era of inexpensive computers, the principal use for multitasking was to allow many people to share the same computer. Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running, but most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a “time slice” until the event it is waiting for has occurred. This frees up time for other programs to execute so that many programs may be run simultaneously without unacceptable speed loss.
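
The following C sketch imitates round-robin time slicing in a deliberately simplified, cooperative form; the task names and slice granularity are invented, and a real operating system would switch tasks on hardware interrupts rather than in an ordinary loop:

  /* A minimal sketch of round-robin time slicing: each "program" is a
     little state machine, and the scheduler gives each one a slice in
     turn. Real systems switch on hardware interrupts; this cooperative
     version is illustrative only. */
  #include <stdio.h>

  typedef struct {
      const char *name;
      int progress;   /* how much work this task has completed */
      int total;      /* how much work it needs in total */
  } task;

  int main(void) {
      task tasks[] = { {"editor", 0, 3}, {"printer", 0, 5}, {"clock", 0, 4} };
      int n = 3, remaining = 3;

      while (remaining > 0) {
          for (int i = 0; i < n; i++) {              /* one slice per task */
              if (tasks[i].progress < tasks[i].total) {
                  tasks[i].progress++;               /* do one unit of work */
                  printf("slice -> %s (%d/%d)\n",
                         tasks[i].name, tasks[i].progress, tasks[i].total);
                  if (tasks[i].progress == tasks[i].total)
                      remaining--;
              }
          }
      }
      return 0;
  }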

Multiprocessing

 
Cray designed many supercomputers that used multiprocessing heavily.

Some computers are designed to distribute their work across several CPUs in a multiprocessing configuration, a technique once employed only in large and powerful machines such as supercomputers, mainframe computers and servers. Multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers are now widely available, and are being increasingly used in lower-end markets as a result.

Supercomputers in particular often have highly distinctive architectures that differ significantly from the basic stored-program architecture and from general purpose computers.[104] They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful only for specialized tasks due to the large scale of program organization required to successfully utilize most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called “embarrassingly parallel” tasks.
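
As a rough illustration of an “embarrassingly parallel” workload, this C sketch (using POSIX threads; the thread count and range are arbitrary choices) splits a summation into independent pieces that separate processors can work on at once:

  /* Summing the numbers 1..1,000,000 split evenly across POSIX threads.
     Each thread computes its share independently; the results are then
     combined. Compile with: cc -pthread sum.c */
  #include <pthread.h>
  #include <stdio.h>

  #define NTHREADS 4
  #define N 1000000ULL

  static unsigned long long partial[NTHREADS];

  static void *sum_range(void *arg) {
      long id = (long)arg;
      unsigned long long lo = id * (N / NTHREADS) + 1;
      unsigned long long hi = (id + 1) * (N / NTHREADS);
      unsigned long long s = 0;
      for (unsigned long long i = lo; i <= hi; i++)
          s += i;                        /* each thread works independently */
      partial[id] = s;
      return NULL;
  }

  int main(void) {
      pthread_t threads[NTHREADS];
      for (long i = 0; i < NTHREADS; i++)
          pthread_create(&threads[i], NULL, sum_range, (void *)i);
      unsigned long long total = 0;
      for (long i = 0; i < NTHREADS; i++) {
          pthread_join(threads[i], NULL);  /* wait, then combine results */
          total += partial[i];
      }
      printf("%llu\n", total);             /* prints 500000500000 */
      return 0;
  }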

Software

Software refers to parts of the computer which do not have a material form, such as programs, data, protocols, etc. Software is that part of a computer system that consists of encoded information or computer instructions, in contrast to the physical hardware from which the system is built. Computer software includes computer programs, libraries and related non-executable data, such as online documentation or digital media. It is often divided into system software and application software. Computer hardware and software require each other and neither can be realistically used on its own. When software is stored in hardware that cannot easily be modified, such as with BIOS ROM in an IBM PC compatible computer, it is sometimes called “firmware”.

Operating system / System software: Unix and BSD – UNIX System V, IBM AIX, HP-UX, Solaris (SunOS), IRIX, List of BSD operating systems
GNU/Linux – List of Linux distributions, Comparison of Linux distributions
Microsoft Windows – Windows 95, Windows 98, Windows NT, Windows 2000, Windows ME, Windows XP, Windows Vista, Windows 7, Windows 8, Windows 8.1, Windows 10
DOS – 86-DOS (QDOS), IBM PC DOS, MS-DOS, DR-DOS, FreeDOS
Macintosh operating systems – Classic Mac OS, macOS (previously OS X and Mac OS X)
Embedded and real-time – List of embedded operating systems
Experimental – Amoeba, Oberon/Bluebottle, Plan 9 from Bell Labs
Library: Multimedia – DirectX, OpenGL, OpenAL, Vulkan (API)
Programming library – C standard library, Standard Template Library
Data: Protocol – TCP/IP, Kermit, FTP, HTTP, SMTP
File format – HTML, XML, JPEG, MPEG, PNG
User interface: Graphical user interface (WIMP) – Microsoft Windows, GNOME, KDE, QNX Photon, CDE, GEM, Aqua
Text-based user interface – Command-line interface, Text user interface
Application software: Office suite – Word processing, Desktop publishing, Presentation program, Database management system, Scheduling & Time management, Spreadsheet, Accounting software
Internet access – Browser, Email client, Web server, Mail transfer agent, Instant messaging
Design and manufacturing – Computer-aided design, Computer-aided manufacturing, Plant management, Robotic manufacturing, Supply chain management
Graphics – Raster graphics editor, Vector graphics editor, 3D modeler, Animation editor, 3D computer graphics, Video editing, Image processing
Audio – Digital audio editor, Audio playback, Mixing, Audio synthesis, Computer music
Software engineering – Compiler, Assembler, Interpreter, Debugger, Text editor, Integrated development environment, Software performance analysis, Revision control, Software configuration management
Educational – Edutainment, Educational game, Serious game, Flight simulator
Games – Strategy, Arcade, Puzzle, Simulation, First-person shooter, Platform, Massively multiplayer, Interactive fiction
Misc – Artificial intelligence, Antivirus software, Malware scanner, Installer/Package management systems, File manager

Languages

There are thousands of different programming languages—some intended to be general purpose, others useful only for highly specialized applications.

Programming languages
Lists of programming languages – Timeline of programming languages, List of programming languages by category, Generational list of programming languages, List of programming languages, Non-English-based programming languages
Commonly used assembly languages – ARM, MIPS, x86
Commonly used high-level programming languages – Ada, BASIC, C, C++, C#, COBOL, Fortran, PL/I, REXX, Java, Lisp, Pascal, Object Pascal
Commonly used scripting languages – Bourne script, JavaScript, Python, Ruby, PHP, Perl

Programs

The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that some type of instructions (the program) can be given to the computer, and it will process them. Modern computers based on the von Neumann architecture often have machine code in the form of an imperative programming language. In practical terms, a computer program may be just a few instructions or extend to many millions of instructions, as do the programs for word processors and web browsers for example. A typical modern computer can execute billions of instructions per second and rarely makes a mistake over many years of operation. Large computer programs consisting of several million instructions may take teams of programmers years to write, and due to the complexity of the task almost certainly contain errors.

Stored program architecture

 
Replica of the Manchester Baby, the world’s first electronic stored-program computer, at the Museum of Science and Industry in Manchester, England

This section applies to most common RAM machine–based computers.

In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer’s memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called “jump” instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that “remembers” the location it jumped from and another instruction to return to the instruction following that jump instruction.

Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention.

Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions. The following example is written in the MIPS assembly language:

 
  begin:
  addi $8, $0, 0           # initialize sum to 0
  addi $9, $0, 1           # set first number to add = 1
  loop:
  slti $10, $9, 1001       # set $10 to 1 while the number is at most 1000
  beq $10, $0, finish      # once the number exceeds 1000, branch to finish
  add $8, $8, $9           # update sum
  addi $9, $9, 1           # get next number
  j loop                   # repeat the summing process
  finish:
  add $2, $8, $0           # put sum in output register

Once told to run this program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake and a modern PC can complete the task in a fraction of a second.

Machine code

In most computers, individual instructions are stored as machine code with each instruction being given a unique number (its operation code or opcode for short). The command to add two numbers together would have one opcode; the command to multiply them would have a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer’s memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of these instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer in the same way as numeric data. The fundamental concept of storing programs in the computer’s memory alongside the data they operate on is the crux of the von Neumann, or stored program[citation needed], architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches.
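
A toy interpreter can make this concrete. In the following C sketch, the opcode numbering (1 = LOAD, 2 = ADD, 0 = HALT) is invented for illustration and corresponds to no real CPU, but instructions and data share one memory array exactly as in the von Neumann design:

  /* A toy stored-program machine: opcodes and data live in the same
     memory array. The instruction encoding here is hypothetical. */
  #include <stdio.h>

  int main(void) {
      int memory[] = {
          1, 7,   /* LOAD: put memory[7] into the accumulator */
          2, 8,   /* ADD:  add memory[8] to the accumulator   */
          0,      /* HALT                                     */
          0, 0,   /* (unused)                                 */
          30, 12  /* data: the two numbers to add             */
      };
      int pc = 0, acc = 0, running = 1;

      while (running) {
          switch (memory[pc]) {            /* fetch and decode the opcode */
              case 1: acc  = memory[memory[pc + 1]]; pc += 2; break;
              case 2: acc += memory[memory[pc + 1]]; pc += 2; break;
              case 0: running = 0; break;
          }
      }
      printf("%d\n", acc);  /* prints 42 */
      return 0;
  }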

While it is possible to write computer programs as long lists of numbers (machine language) and while this technique was used with many early computers,[105] it is extremely tedious and potentially error-prone to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember – a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer’s assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler.
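
In the same spirit, the heart of an assembler is little more than a table mapping mnemonics to numeric opcodes. This hedged C sketch reuses the invented opcodes of the toy machine above; a real assembler must also resolve labels, encode operands and lay out memory:

  /* A sketch of what an assembler does at its core: translating
     mnemonics into (here, hypothetical) opcode numbers. */
  #include <stdio.h>
  #include <string.h>

  int opcode_for(const char *mnemonic) {
      if (strcmp(mnemonic, "LOAD") == 0) return 1;
      if (strcmp(mnemonic, "ADD")  == 0) return 2;
      if (strcmp(mnemonic, "HALT") == 0) return 0;
      return -1;  /* unknown mnemonic */
  }

  int main(void) {
      const char *program[] = { "LOAD", "ADD", "HALT" };
      for (int i = 0; i < 3; i++)
          printf("%s -> %d\n", program[i], opcode_for(program[i]));
      return 0;
  }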

 
A 1970s punched card containing one line from a Fortran program. The card reads: “Z(1) = Y + W(1)” and is labeled “PROJ039” for identification purposes.

Programming language

Programming languages provide various ways of specifying programs for computers to run. Unlike natural languages, programming languages are designed to permit no ambiguity and to be concise. They are purely written languages and are often difficult to read aloud. They are generally either translated into machine code by a compiler or an assembler before being run, or translated directly at run time by an interpreter. Sometimes programs are executed by a hybrid method of the two techniques.

Low-level languages

Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) are generally unique to the particular architecture of a computer’s central processing unit (CPU). For instance, an ARM architecture CPU (such as may be found in a smartphone or a hand-held videogame) cannot understand the machine language of an x86 CPU that might be in a PC.[106] Historically a significant number of other CPU architectures were created and saw extensive use, notably including the MOS Technology 6502 and 6510 in addition to the Zilog Z80.

High-level languages

Although considerably easier than in machine language, writing long programs in assembly language is often difficult and is also error prone. Therefore, most practical programs are written in more abstract high-level programming languages that are able to express the needs of the programmer more conveniently (and thereby help reduce programmer error). High level languages are usually “compiled” into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler.[107] High level languages are less related to the workings of the target computer than assembly language, and more related to the language and structure of the problem(s) to be solved by the final program. It is therefore often possible to use different compilers to translate the same high level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles.
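
For comparison with the MIPS listing earlier, the same 1-to-1,000 summation can be written in a few lines of C; a compiler translates this into loop, compare and branch instructions much like the assembly version:

  #include <stdio.h>

  int main(void) {
      int sum = 0;
      for (int number = 1; number <= 1000; number++)
          sum += number;   /* the compiler generates the loop,
                              compare and branch instructions */
      printf("%d\n", sum); /* prints 500500 */
      return 0;
  }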

Program design

Program design of small programs is relatively simple and involves the analysis of the problem, collection of inputs, using the programming constructs within languages, devising or using established procedures and algorithms, providing data for output devices and solutions to the problem as applicable. As problems become larger and more complex, features such as subprograms, modules, formal documentation, and new paradigms such as object-oriented programming are encountered. Large programs involving thousands of lines of code and more require formal software methodologies. The task of developing large software systems presents a significant intellectual challenge. Producing software with an acceptably high reliability within a predictable schedule and budget has historically been difficult; the academic and professional discipline of software engineering concentrates specifically on this challenge.

Bugs

 
The actual first computer bug, a moth found trapped on a relay of the Harvard Mark II computer

Errors in computer programs are called “bugs“. They may be benign and not affect the usefulness of the program, or have only subtle effects. But in some cases, they may cause the program or the entire system to “hang“, becoming unresponsive to input such as mouse clicks or keystrokes, to completely fail, or to crash. Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an exploit, code designed to take advantage of a bug and disrupt a computer’s proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program’s design.[108] Admiral Grace Hopper, an American computer scientist and developer of the first compiler, is credited for having first used the term “bugs” in computing after a dead moth was found shorting a relay in the Harvard Mark II computer in September 1947.[109]
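
A hypothetical but typical example is the “off-by-one” error, shown in this C sketch, where a loop bound reads one element past the end of an array:

  /* A classic off-by-one bug: the loop condition should be i < 5.
     Reading array[5] goes one element past the end, producing
     undefined behavior that may crash or silently corrupt results. */
  #include <stdio.h>

  int main(void) {
      int array[5] = {10, 20, 30, 40, 50};
      int sum = 0;
      for (int i = 0; i <= 5; i++)   /* BUG: should be i < 5 */
          sum += array[i];
      printf("%d\n", sum);           /* result is unpredictable */
      return 0;
  }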

Networking and the Internet

 
Visualization of a portion of the routes on the Internet

Computers have been used to coordinate information between multiple locations since the 1950s. The U.S. military’s SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems such as Sabre.[110] In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. The effort was funded by ARPA (now DARPA), and the computer network that resulted was called the ARPANET.[111] The technologies that made the ARPANET possible spread and evolved.

In time, the network spread beyond academic and military institutions and became known as the Internet. The emergence of networking involved a redefinition of the nature and boundaries of the computer. Computer operating systems and applications were modified to include the ability to define and access the resources of other computers on the network, such as peripheral devices, stored information, and the like, as extensions of the resources of an individual computer. Initially these facilities were available primarily to people working in high-tech environments, but in the 1990s the spread of applications like e-mail and the World Wide Web, combined with the development of cheap, fast networking technologies like Ethernet and ADSL saw computer networking become almost ubiquitous. In fact, the number of computers that are networked is growing phenomenally. A very large proportion of personal computers regularly connect to the Internet to communicate and receive information. “Wireless” networking, often utilizing mobile phone networks, has meant networking is becoming increasingly ubiquitous even in mobile computing environments.
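
As a hedged sketch of how a program treats another computer’s resources as an extension of its own, this POSIX C client (the host, port and request are illustrative, and error handling is minimal) opens a TCP connection to a web server and reads the start of its reply:

  /* Minimal TCP client sketch (POSIX sockets): connect to a web server
     and send a bare-bones HTTP request. Illustrative only; real clients
     need far more error handling and protocol support. */
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>
  #include <netdb.h>
  #include <sys/socket.h>

  int main(void) {
      struct addrinfo hints = {0}, *res;
      hints.ai_family = AF_UNSPEC;       /* IPv4 or IPv6 */
      hints.ai_socktype = SOCK_STREAM;   /* TCP */
      if (getaddrinfo("example.com", "80", &hints, &res) != 0)
          return 1;

      int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
      if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) < 0)
          return 1;

      const char *request = "GET / HTTP/1.0\r\nHost: example.com\r\n\r\n";
      send(fd, request, strlen(request), 0);

      char buffer[512];
      ssize_t n = recv(fd, buffer, sizeof(buffer) - 1, 0);
      if (n > 0) {
          buffer[n] = '\0';
          printf("%s\n", buffer);   /* first bytes of the server's reply */
      }
      close(fd);
      freeaddrinfo(res);
      return 0;
  }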

 

Unconventional computers

A computer does not need to be electronic, nor even have a processor, nor RAM, nor even a hard disk. While popular usage of the word “computer” is synonymous with a personal electronic computer, the modern[112] definition of a computer is literally: “A device that computes, especially a programmable [usually] electronic machine that performs high-speed mathematical or logical operations or that assembles, stores, correlates, or otherwise processes information.”[113] Any device which processes information qualifies as a computer, especially if the processing is purposeful.[citation needed]

Future

There is active research to make computers out of many promising new types of technology, such as optical computers, DNA computers, neural computers, and quantum computers. Most computers are universal, and are able to calculate any computable function, and are limited only by their memory capacity and operating speed. However, different designs of computers can give very different performance for particular problems; for example quantum computers can potentially break some modern encryption algorithms (by quantum factoring) very quickly.

Computer architecture paradigms

There are many types of computer architectures:

Quantum computer vs. chemical computer
Scalar processor vs. vector processor
Non-Uniform Memory Access (NUMA) computers
Register machine vs. stack machine
Harvard architecture vs. von Neumann architecture
Cellular architecture

Of all these abstract machines, a quantum computer holds the most promise for revolutionizing computing.[114] Logic gates are a common abstraction which can apply to most of the above digital or analog paradigms. The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a minimum capability (being Turing-complete) is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, any type of computer (netbook, supercomputer, cellular automaton, etc.) is able to perform the same computational tasks, given enough time and storage capacity.

Artificial intelligence

A computer will solve problems in exactly the way it is programmed to, without regard to efficiency, alternative solutions, possible shortcuts, or possible errors in the code. Computer programs that learn and adapt are part of the emerging field of artificial intelligence and machine learning. Artificial intelligence based products generally fall into two major categories: rule-based systems and pattern recognition systems. Rule-based systems attempt to represent the rules used by human experts and tend to be expensive to develop. Pattern-based systems use data about a problem to generate conclusions. Examples of pattern-based systems include voice recognition, font recognition, translation and the emerging field of online marketing.

Professions and organizations

As the use of computers has spread throughout society, an increasing number of careers involve them.

Computer-related professions
Hardware-related: Electrical engineering, Electronic engineering, Computer engineering, Telecommunications engineering, Optical engineering, Nanoengineering
Software-related: Computer science, Computer engineering, Desktop publishing, Human–computer interaction, Information technology, Information systems, Computational science, Software engineering, Video game industry, Web design

The need for computers to work well together and to be able to exchange information has spawned the need for many standards organizations, clubs and societies of both a formal and informal nature.

Organizations
Standards groups: ANSI, IEC, IEEE, IETF, ISO, W3C
Professional societies: ACM, AIS, IET, IFIP, BCS
Free/open source software groups: Free Software Foundation, Mozilla Foundation, Apache Software Foundation

References

  1. ^ Evans 2018, p. 23.
  2. ^ a b Smith 2013, p. 6.
  3. ^ “computer (n.)”. Online Etymology Dictionary.
  4. ^ According to Schmandt-Besserat 1981, these clay containers contained tokens, the total of which were the count of objects being transferred. The containers thus served as something of a bill of lading or an accounts book. In order to avoid breaking open the containers, first, clay impressions of the tokens were placed on the outside of the containers, for the count; the shapes of the impressions were abstracted into stylized marks; finally, the abstract marks were systematically used as numerals; these numerals were finally formalized as numbers. Eventually (Schmandt-Besserat estimates it took 4000 years, Archived 30 January 2012 at the Wayback Machine) the marks on the outside of the containers were all that were needed to convey the count, and the clay containers evolved into clay tablets with marks for the count.
  5. ^ Robson, Eleanor (2008), Mathematics in Ancient Iraq, ISBN 978-0-691-09182-2, p. 5: calculi were in use in Iraq for primitive accounting systems as early as 3200–3000 BCE, with commodity-specific counting representation systems. Balanced accounting was in use by 3000–2350 BCE, and a sexagesimal number system was in use 2350–2000 BCE.
  6. ^ Flegg, Graham (1989). Numbers Through the Ages. Houndmills, Basingstoke, Hampshire: Macmillan Education. ISBN 0-333-49130-0. OCLC 24660570.
  7. ^ The Antikythera Mechanism Research Project Archived 28 April 2008 at the Wayback Machine, The Antikythera Mechanism Research Project. Retrieved 1 July 2007.
  8. ^ G. Wiet, V. Elisseeff, P. Wolff, J. Naudu (1975). History of Mankind, Vol 3: The Great medieval Civilisations, p. 649. George Allen & Unwin Ltd, UNESCO.
  9. ^ Fuat Sezgin “Catalogue of the Exhibition of the Institute for the History of Arabic-Islamic Science (at the Johann Wolfgang Goethe University”, Frankfurt, Germany) Frankfurt Book Fair 2004, pp. 35 & 38.
  10. ^ Charette, François (2006). “Archaeology: High tech from Ancient Greece”. Nature 444 (7119): 551–552. Bibcode:2006Natur.444..551C. doi:10.1038/444551a. PMID 17136077.
  11. ^ Bedini, Silvio A.; Maddison, Francis R. (1966). “Mechanical Universe: The Astrarium of Giovanni de’ Dondi”. Transactions of the American Philosophical Society 56 (5): 1–69. doi:10.2307/1006002. JSTOR 1006002.
  12. ^ Price, Derek de S. (1984). “A History of Calculating Machines”. IEEE Micro 4 (1): 22–52. doi:10.1109/MM.1984.291305.
  13. ^ Őren, Tuncer (2001). “Advances in Computer and Information Sciences: From Abacus to Holonic Agents” (PDF). Turk J Elec Engin 9 (1): 63–70.
  14. ^ Donald Routledge Hill (1985). “Al-Biruni’s mechanical calendar”, Annals of Science 42, pp. 139–163.
  15. ^ “The Writer Automaton, Switzerland”. chonday.com. 11 July 2013.
  16. ^ a b Ray Girvan, “The revealed grace of the mechanism: computing after Babbage”, Archived 3 November 2012 at the Wayback Machine, Scientific Computing World, May/June 2003
  17. ^ Halacy, Daniel Stephen (1970). Charles Babbage, Father of the Computer. Crowell-Collier Press. ISBN 978-0-02-741370-0.
  18. ^ “Babbage”. Online stuff. Science Museum. 19 January 2007. Retrieved 1 August 2012.
  19. ^ “Let’s build Babbage’s ultimate mechanical computer”. Opinion. New Scientist. 23 December 2010. Retrieved 1 August 2012.
  20. ^ a b c d The Modern History of Computing. Stanford Encyclopedia of Philosophy. 2017.
  21. ^ Zuse, Horst. “Part 4: Konrad Zuse’s Z1 and Z3 Computers”. The Life and Work of Konrad Zuse. EPE Online. Archived from the original on 1 June 2008. Retrieved 17 June 2008.
  22. ^ Zuse, Konrad (2010) [1984], The Computer – My Life, translated by McKenna, Patricia and Ross, J. Andrew from: Der Computer, mein Lebenswerk (1984), Berlin/Heidelberg: Springer-Verlag, ISBN 978-3-642-08151-4
  23. ^ Salz Trautman, Peggy (20 April 1994). “A Computer Pioneer Rediscovered, 50 Years On”. The New York Times.
  24. ^ Zuse, Konrad (1993). Der Computer. Mein Lebenswerk (in German) (3rd ed.). Berlin: Springer-Verlag. p. 55. ISBN 978-3-540-56292-4.
  25. ^ “Crash! The Story of IT: Zuse”. Archived from the original on 18 September 2016. Retrieved 1 June 2016.
  26. ^ Rojas, R. (1998). “How to make Zuse’s Z3 a universal computer”. IEEE Annals of the History of Computing 20 (3): 51–54. doi:10.1109/85.707574. S2CID 14606587.
  27. ^ Rojas, Raúl. “How to Make Zuse’s Z3 a Universal Computer” (PDF).
  28. ^ 15 January 1941 notice in the Des Moines Register.
  29. ^ Arthur W. Burks (1989). The First Electronic Computer. ISBN 0472081047.
  30. ^ a b c d Copeland, Jack (2006), Colossus: The Secrets of Bletchley Park’s Codebreaking Computers, Oxford: Oxford University Press, pp. 101–115, ISBN 978-0-19-284055-4
  31. ^ Miller, Joe (10 November 2014). “The woman who cracked Enigma cyphers”. BBC News. Retrieved 14 October 2018.
  32. ^ Bearne, Suzanne (24 July 2018). “Meet the female codebreakers of Bletchley Park”. The Guardian. Retrieved 14 October 2018.
  33. ^ Bletchley’s code-cracking Colossus, BBC News, 2 February 2010, retrieved 19 October 2012
  34. ^ “Colossus – The Rebuild Story”. The National Museum of Computing. Archived from the original on 18 April 2015. Retrieved 7 January 2014.
  35. ^ Randell, Brian; Fensom, Harry; Milne, Frank A. (15 March 1995), “Obituary: Allen Coombs”, The Independent, retrieved 18 October 2012
  36. ^ Fensom, Jim (8 November 2010), “Harry Fensom obituary”, The Guardian, retrieved 17 October 2012
  37. ^ John Presper Eckert Jr. and John W. Mauchly, Electronic Numerical Integrator and Computer, United States Patent Office, US Patent 3,120,606, filed 26 June 1947, issued 4 February 1964, and invalidated 19 October 1973 after court ruling on Honeywell v. Sperry Rand.
  38. ^ Evans 2018, p. 39.
  39. ^ Light 1999, p. 459.
  40. ^ “Generations of Computer”. techiwarehouse.com. Archived from the original on 2 July 2015. Retrieved 7 January 2014.
  41. ^ Turing, A. M. (1937). “On Computable Numbers, with an Application to the Entscheidungsproblem”. Proceedings of the London Mathematical Society. 2. 42 (1): 230–265. doi:10.1112/plms/s2-42.1.230.
  42. ^ “von Neumann … firmly emphasized to me, and to others I am sure, that the fundamental conception is owing to Turing—insofar as not anticipated by Babbage, Lovelace and others.” Letter by Stanley Frankel to Brian Randell, 1972, quoted in Jack Copeland (2004) The Essential Turing, p. 22.
  43. ^ Enticknap, Nicholas (Summer 1998), “Computing’s Golden Jubilee”, Resurrection (20), ISSN 0958-7403, archived from the original on 9 January 2012, retrieved 19 April 2008
  44. ^ “Early computers at Manchester University”, Resurrection, 1 (4), Summer 1992, ISSN 0958-7403, archived from the original on 28 August 2017, retrieved 7 July 2010
  45. ^ Early Electronic Computers (1946–51), University of Manchester, archived from the original on 5 January 2009, retrieved 16 November 2008
  46. ^ Napper, R. B. E., Introduction to the Mark 1, The University of Manchester, archived from the original on 26 October 2008, retrieved 4 November 2008
  47. ^ Computer Conservation Society, Our Computer Heritage Pilot Study: Deliveries of Ferranti Mark I and Mark I Star computers, archived from the original on 11 December 2016, retrieved 9 January 2010
  48. ^ Lavington, Simon. “A brief history of British computers: the first 25 years (1948–1973)”. British Computer Society. Retrieved 10 January 2010.
  49. ^ Lee, Thomas H. (2003). The Design of CMOS Radio-Frequency Integrated Circuits (PDF). Cambridge University Press. ISBN 9781139643771.
  50. ^ Puers, Robert; Baldi, Livio; Voorde, Marcel Van de; Nooten, Sebastiaan E. van (2017). Nanoelectronics: Materials, Devices, Applications, 2 Volumes. John Wiley & Sons. p. 14. ISBN 9783527340538.
  51. ^ a b c Moskowitz, Sanford L. (2016). Advanced Materials Innovation: Managing Global Technology in the 21st Century. John Wiley & Sons. pp. 165–167. ISBN 9780470508923.
  52. ^ Lavington, Simon (1998), A History of Manchester Computers (2 ed.), Swindon: The British Computer Society, pp. 34–35
  53. ^ a b Cooke-Yarborough, E. H. (June 1998), “Some early transistor applications in the UK”, Engineering Science & Education Journal, 7 (3): 100–106, doi:10.1049/esej:19980301, ISSN 0963-7346, retrieved 7 June 2009 (subscription required)
  54. ^ Cooke-Yarborough, E.H. (1957). Introduction to Transistor Circuits. Edinburgh: Oliver and Boyd. p. 139.
  55. ^ “1960: Metal Oxide Semiconductor (MOS) Transistor Demonstrated”. The Silicon Engine: A Timeline of Semiconductors in Computers. Computer History Museum. Retrieved 31 August 2019.
  56. ^ Motoyoshi, M. (2009). “Through-Silicon Via (TSV)”. Proceedings of the IEEE 97 (1): 43–48. doi:10.1109/JPROC.2008.2007462. ISSN 0018-9219. S2CID 29105721.
  57. ^ “Transistors Keep Moore’s Law Alive”. EETimes. 12 December 2018. Retrieved 18 July 2019.
  58. ^ “Who Invented the Transistor?”. Computer History Museum. 4 December 2013. Retrieved 20 July 2019.
  59. ^ a b Hittinger, William C. (1973). “Metal-Oxide-Semiconductor Technology”. Scientific American 229 (2): 48–59. Bibcode:1973SciAm.229b..48H. doi:10.1038/scientificamerican0873-48. ISSN 0036-8733. JSTOR 24923169.
  60. ^ “Transistors – an overview”. ScienceDirect. Retrieved 8 August 2019.
  61. ^ Malmstadt, Howard V.; Enke, Christie G.; Crouch, Stanley R. (1994). Making the Right Connections: Microcomputers and Electronic Instrumentation. American Chemical Society. p. 389. ISBN 9780841228610. “The relative simplicity and low power requirements of MOSFETs have fostered today’s microcomputer revolution.”
  62. ^ Fossum, Jerry G.; Trivedi, Vishal P. (2013). Fundamentals of Ultra-Thin-Body MOSFETs and FinFETs. Cambridge University Press. p. vii. ISBN 9781107434493.
  63. ^ “Remarks by Director Iancu at the 2019 International Intellectual Property Conference”. United States Patent and Trademark Office. 10 June 2019. Retrieved 20 July 2019.
  64. ^ “Dawon Kahng”. National Inventors Hall of Fame. Retrieved 27 June 2019.
  65. ^ “Martin Atalla in Inventors Hall of Fame, 2009”. Retrieved 21 June 2013.
  66. ^ “Triumph of the MOS Transistor”. YouTube. Computer History Museum. 6 August 2010. Retrieved 21 July 2019.
  67. ^ “The Hapless Tale of Geoffrey Dummer” Archived 11 May 2013 at the Wayback Machine, (n.d.), (HTML), Electronic Product News, accessed 8 July 2008.
  68. ^ Kilby, Jack (2000), Nobel lecture (PDF), Stockholm: Nobel Foundation, retrieved 15 May 2008
  69. ^ The Chip that Jack Built, (c. 2008), (HTML), Texas Instruments, Retrieved 29 May 2008.
  70. ^ Jack S. Kilby, Miniaturized Electronic Circuits, United States Patent Office, US Patent 3,138,743, filed 6 February 1959, issued 23 June 1964.
  71. ^ Winston, Brian (1998). Media Technology and Society: A History : From the Telegraph to the Internet. Routledge. p. 221. ISBN 978-0-415-14230-4.
  72. ^ Saxena, Arjun N. (2009). Invention of Integrated Circuits: Untold Important Facts. World Scientific. p. 140. ISBN 9789812814456.
  73. ^ a b “Integrated circuits”. NASA. Retrieved 13 August 2019.
  74. ^ Robert Noyce’s Unitary circuit, US patent 2981877, “Semiconductor device-and-lead structure”, issued 1961-04-25, assigned to Fairchild Semiconductor Corporation
  75. ^ “1959: Practical Monolithic Integrated Circuit Concept Patented”. Computer History Museum. Retrieved 13 August 2019.
  76. ^ Lojek, Bo (2007). History of Semiconductor Engineering. Springer Science & Business Media. p. 120. ISBN 9783540342588.
  77. ^ Bassett, Ross Knox (2007). To the Digital Age: Research Labs, Start-up Companies, and the Rise of MOS Technology. Johns Hopkins University Press. p. 46. ISBN 9780801886393.
  78. ^ Huff, Howard R.; Tsuya, H.; Gösele, U. (1998). Silicon Materials Science and Technology: Proceedings of the Eighth International Symposium on Silicon Materials Science and Technology. Electrochemical Society. pp. 181–182.
  79. ^ Kuo, Yue (1 January 2013). “Thin Film Transistor Technology—Past, Present, and Future” (PDF). The Electrochemical Society Interface 22 (1): 55–61. doi:10.1149/2.F06131if. ISSN 1064-8208.
  80. ^ “1960: Metal Oxide Semiconductor (MOS) Transistor Demonstrated”. Computer History Museum.
  81. ^ Bassett, Ross Knox (2007). To the Digital Age: Research Labs, Start-up Companies, and the Rise of MOS Technology. Johns Hopkins University Press. pp. 22–25. ISBN 9780801886393.
  82. ^ a b “Tortoise of Transistors Wins the Race – CHM Revolution”. Computer History Museum. Retrieved 22 July 2019.
  83. ^ “1964 – First Commercial MOS IC Introduced”. Computer History Museum.
  84. ^ “1968: Silicon Gate Technology Developed for ICs”. Computer History Museum. Retrieved 22 July 2019.
  85. ^ Kuo, Yue (1 January 2013). “Thin Film Transistor Technology—Past, Present, and Future” (PDF). The Electrochemical Society Interface 22 (1): 55–61. doi:10.1149/2.F06131if. ISSN 1064-8208.
  86. ^ a b “1971: Microprocessor Integrates CPU Function onto a Single Chip”. Computer History Museum. Retrieved 22 July 2019.
  87. ^ Colinge, Jean-Pierre; Greer, James C. (2016). Nanowire Transistors: Physics of Devices and Materials in One Dimension. Cambridge University Press. p. 2. ISBN 9781107052406.
  88. ^ Intel’s First Microprocessor—the Intel 4004, Intel Corp., November 1971, archived from the original on 13 May 2008, retrieved 17 May 2008
  89. ^ The Intel 4004 (1971) die was 12 mm², composed of 2300 transistors; by comparison, the Pentium Pro was 306 mm², composed of 5.5 million transistors, according to Patterson, David; Hennessy, John (1998), Computer Organization and Design, San Francisco: Morgan Kaufmann, pp. 27–39, ISBN 978-1-55860-428-5
  90. ^ Federico Faggin, The Making of the First Microprocessor, IEEE Solid-State Circuits Magazine, Winter 2009, IEEE Xplore
  91. ^ a b “7 dazzling smartphone improvements with Qualcomm’s Snapdragon 835 chip”. 3 January 2017.
  92. ^ Chartier, David (23 December 2008). “Global notebook shipments finally overtake desktops”. Ars Technica.
  93. ^ IDC (25 July 2013). “Growth Accelerates in the Worldwide Mobile Phone and Smartphone Markets in the Second Quarter, According to IDC”. Archived from the original on 26 June 2014.
  94. ^ Most major 64-bit instruction set architectures are extensions of earlier designs. All of the architectures listed in this table, except for Alpha, existed in 32-bit forms before their 64-bit incarnations were introduced.
  95. ^ The control unit’s role in interpreting instructions has varied somewhat in the past. Although the control unit is solely responsible for instruction interpretation in most modern computers, this is not always the case. Some computers have instructions that are partially interpreted by the control unit with further interpretation performed by another device. For example, EDVAC, one of the earliest stored-program computers, used a central control unit that only interpreted four instructions. All of the arithmetic-related instructions were passed on to its arithmetic unit and further decoded there.
  96. ^ Instructions often occupy more than one memory address, therefore the program counter usually increases by the number of memory locations required to store one instruction.
  97. ^ David J. Eck (2000). The Most Complex Machine: A Survey of Computers and Computing. A K Peters, Ltd. p. 54. ISBN 978-1-56881-128-4.
  98. ^ Erricos John Kontoghiorghes (2006). Handbook of Parallel Computing and Statistics. CRC Press. p. 45. ISBN 978-0-8247-4067-2.
  99. ^ Flash memory also may only be rewritten a limited number of times before wearing out, making it less useful for heavy random access usage. (Verma & Mielke 1988)
  100. ^ Donald Eadie (1968). Introduction to the Basic Computer. Prentice-Hall. p. 12.
  101. ^ Arpad Barna; Dan I. Porat (1976). Introduction to Microcomputers and the Microprocessors. Wiley. p. 85. ISBN 978-0-471-05051-3.
  102. ^ Jerry Peek; Grace Todino; John Strang (2002). Learning the UNIX Operating System: A Concise Guide for the New User. O’Reilly. p. 130. ISBN 978-0-596-00261-9.
  103. ^ Gillian M. Davis (2002). Noise Reduction in Speech Applications. CRC Press. p. 111. ISBN 978-0-8493-0949-6.
  104. ^ However, it is also very common to construct supercomputers out of many pieces of cheap commodity hardware; usually individual computers connected by networks. These so-called computer clusters can often provide supercomputer performance at a much lower cost than customized designs. While custom architectures are still used for most of the most powerful supercomputers, there has been a proliferation of cluster computers in recent years. (TOP500 2006)
  105. ^ Even some later computers were commonly programmed directly in machine code. Some minicomputers like the DEC PDP-8 could be programmed directly from a panel of switches. However, this method was usually used only as part of the booting process. Most modern computers boot entirely automatically by reading a boot program from some non-volatile memory.
  106. ^ However, there is sometimes some form of machine language compatibility between different computers. An x86-64 compatible microprocessor like the AMD Athlon 64 is able to run most of the same programs that an Intel Core 2 microprocessor can, as well as programs designed for earlier microprocessors like the Intel Pentiums and Intel 80486. This contrasts with very early commercial computers, which were often one-of-a-kind and totally incompatible with other computers.
  107. ^ High level languages are also often interpreted rather than compiled. Interpreted languages are translated into machine code on the fly, while running, by another program called an interpreter.
  108. ^ It is not universally true that bugs are solely due to programmer oversight. Computer hardware may fail or may itself have a fundamental problem that produces unexpected results in certain situations. For instance, the Pentium FDIV bug caused some Intel microprocessors in the early 1990s to produce inaccurate results for certain floating point division operations. This was caused by a flaw in the microprocessor design and resulted in a partial recall of the affected devices.
  109. ^ Taylor, Alexander L., III (16 April 1984). “The Wizard Inside the Machine”. TIME. Retrieved 17 February 2007. (subscription required)
  110. ^ Agatha C. Hughes (2000). Systems, Experts, and Computers. MIT Press. p. 161. ISBN 978-0-262-08285-3. “The experience of SAGE helped make possible the first truly large-scale commercial real-time network: the SABRE computerized airline reservations system …”
  111. ^ Leiner, Barry M.; Cerf, Vinton G.; Clark, David D.; Kahn, Robert E.; Kleinrock, Leonard; Lynch, Daniel C.; Postel, Jon; Roberts, Larry G.; Wolf, Stephen (1999). “A Brief History of the Internet”. Internet Society. arXiv:cs/9901011. Bibcode:1999cs……..1011L. Retrieved 20 September 2008.
  112. ^ According to the Shorter Oxford English Dictionary (6th ed., 2007), the word computer dates back to the mid 17th century, when it referred to “A person who makes calculations; specifically a person employed for this in an observatory etc.”
  113. ^ “Definition of computer”. Thefreedictionary.com. Retrieved 29 January 2012.
  114. ^ II, Joseph D. Dumas (2005). Computer Architecture: Fundamentals and Principles of Computer Design. CRC Press. p. 340. ISBN 9780849327490.
