Father of the Internet on Inventing an Internet that is the Fastest Supercomputer | Philip Emeagwali

TIME magazine called him
“the unsung hero behind the Internet.” CNN called him “A Father of the Internet.”
President Bill Clinton called him “one of the great minds of the Information
Age.” He has been voted history’s greatest scientist
of African descent. He is Philip Emeagwali.
He is coming to Trinidad and Tobago to launch the 2008 Kwame Ture lecture series
on Sunday June 8 at the JFK [John F. Kennedy] auditorium
UWI [The University of the West Indies] Saint Augustine 5 p.m.
The Emancipation Support Committee invites you to come and hear this inspirational
mind address the theme:
“Crossing New Frontiers to Conquer Today’s Challenges.”
This lecture is one you cannot afford to miss. Admission is free.
So be there on Sunday June 8 at 5 p.m.
at the JFK auditorium UWI St. Augustine. [Wild applause and cheering for 22 seconds] [Inventing a New Supercomputer] I’m Philip Emeagwali.
Back in 1989, I was in the news headlines
and described as the African supercomputer genius
that won top US prize. I won that supercomputer prize
for my contributions to the development of the computer.
I won that prize because I challenged the orthodoxy
of supercomputing only one thing, or solving only one problem, at a time,
and of solving the toughest problems in a step-by-step fashion.
I was the first person to figure out how to massively
parallel process across one million processors
and how to solve a grand challenge problem
and how to solve the most challenging problems
across a new internet that is a new supercomputer
and how to solve the toughest problems
at the fastest supercomputer speeds. [School Visits of Philip Emeagwali] I’ve had the experience
of each student of an entire school writing a school report
on my contributions to the development of the computer
and writing the report a week before I visited their school.
During my school visits, I advised the students
that inventing a never-before-seen supercomputer is far more complicated
than a science fair project. I explained to students
that the fastest parallel supercomputer is defined across an ensemble
of over ten million processors that occupies the space of a soccer field.
The fastest supercomputer in the world costs more than the annual budget
of the forty poorest nations in the world. For that reason, it’s impossible
for a student or his entire school or sometimes his country of origin
to assemble or program a supercomputer. [A Brief History of Parallel Supercomputing] On February 1, 1922, parallel processing
across a network of 64,000 human computers
was published as a science fiction story. Back in 1958, the term
“parallel computer” was first mentioned in the literature.
Back in 1962, a four-processor supercomputer was manufactured.
At that time, harnessing the total computing power
of eight processors was perceived as the upper limit
for all supercomputers. For sixty-seven years,
progress in the speedup of parallel supercomputers stopped.
Paradigm shifting advances in supercomputing stopped because
it was then impossible to solve a grand challenge problem
and solve the problem by dividing the problem
into a million, or a billion, smaller problems
and then parallel supercomputing that grand challenge problem across
a large ensemble of as many tightly-coupled
commodity-off-the-shelf processors. That barrier against parallel processing
lasted for sixty-seven years and gave rise to the saying that
parallel supercomputing is a beautiful theory
that lacked an experimental confirmation.
Put differently, the practical parallel supercomputer
was a grand challenge, a science fiction, and an idea that had not been proven true.
Solving real-world problems, such as forecasting the weather,
and doing so across millions of processors
was science fiction, back in the 1980s and earlier.
In a historic debate at a computer conference in
Silicon Valley (California), held April 18-20, 1967,
Gene Amdahl, the IBM supercomputer czar
of Amdahl’s Law fame, defeated Daniel Slotnick.
Gene Amdahl’s victory gave rise to the ubiquitous supercomputer
term called “Amdahl’s Law.”
In plain language, Amdahl’s Law decreed that parallel supercomputing
would forever remain a huge waste of everybody’s time.
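Amdahl’s pessimism has a precise form. As an illustration (this is the textbook statement of Amdahl’s Law, not a formula quoted in this talk): if only a fraction p of a workload can run in parallel on N processors, the overall speedup is capped at 1 / ((1 − p) + p/N).

```python
# Amdahl's Law (standard formulation, supplied here for illustration):
# if only a fraction p of a computation can run in parallel on N
# processors, the overall speedup is capped at 1 / ((1 - p) + p / N).

def amdahl_speedup(p: float, n: int) -> float:
    """Best-case speedup for parallel fraction p on n processors."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the work parallelized, 65,536 processors yield
# a speedup of just under 20 -- the arithmetic behind the pessimism
# about massively parallel machines.
print(round(amdahl_speedup(0.95, 65_536), 2))   # 19.99
# A perfectly parallelizable workload (p = 1) scales with n:
print(amdahl_speedup(1.0, 65_536))              # 65536.0
```

The law was read as a verdict against massive parallelism because any serial fraction, however tiny, dominates once N is large.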
Fast forward twenty-two years after Amdahl’s Law entered into supercomputer
textbooks, I was in the news headlines because
I discovered on July 4, 1989 that the supercomputer textbooks
were wrong and that parallel supercomputing
is not a waste of time. The new parallel supercomputer knowledge
that I discovered is codified into every supercomputer
that has been manufactured since 1989. In the fastest computing
or supercomputing of 1989 and later, a large ensemble of processors
is harnessed and used to solve real-world problems
and do so in parallel. Prior to my discovery
of practical parallel supercomputing, processors that were not members
of an ensemble of processors were harnessed and used to solve
only one real-world problem at a time. After my discovery,
parallel supercomputing became the paradigm shifting technology
that changed the way we look at the supercomputer.
The supercomputer costs the budget of a small nation
and has an impact on the world that is in proportion to its cost.
The invention of the parallel supercomputer
was the culmination of a seven-decade long journey
that began as a science fiction story. That quest
for the fastest supercomputer ended when I discovered
how to solve, in parallel, one of the toughest problems
arising in physics and mathematics. That science fiction
that was first published on February 1, 1922
became non-fiction during my parallel processing
laboratory experiment that ended on my Eureka moment of
8:15 in the morning of the Fourth of July 1989
in Los Alamos, New Mexico, United States.
Looking across millions of years of evolution, working together
to solve a grand challenge problem is nothing new in the animal kingdom
where lions hunt in packs to kill an adult elephant.
But solving the toughest problems arising at the crossroads
where physics, algebra, calculus, and computing met,
and solving them across a new internet brought authority and clarity
that ushered in a never-before-seen supercomputer
and that made the news headlines and that elevated
the massively parallel supercomputer from fiction to non-fiction. [Inventing Practical Parallel Supercomputing] Loosely speaking, parallel supercomputing
is akin to multitasking with millions upon millions
of processors. Humans want to be gods
that perform organ transplants and IVF (or In Vitro Fertilization)
and euthanasia (or dignified death). I believe that
the planetary-sized supercomputer that I first conceived in 1974
and that I described in my lectures of the early 1980s
will take us closer to creating the golden calf,
the God that our post-human descendants
of Year Million will worship. Perhaps, the God
that post-humans could try to revive as their all-powerful ruler.
The deity that will control the minds
of post-humans. The deity
that could become the planetary super brain
of our Year Million post-human descendants. [Recognition From President Bill Clinton] My two test-bed problems
were grand challenge initial-boundary value problems
that arose from extreme-scale computational fluid dynamics.
In the general circulation modeling of the motions of fluids
that enshroud the Earth, three-dimensional
circularity exists by definition. But in the simulation
of the motions of the fluids inside a production oilfield
that is the size of a town, the crude oil, injected water,
and natural gas that flow one mile deep
and flow into and from each petroleum reservoir
come from the petroleum reservoirs that bound it, except for the first
and the last petroleum reservoirs. Again, I visualized the entire oilfield
that I was simulating and that is the size of a town
as divided into sixty-five thousand five hundred and thirty-six [65,536]
equal-sized oilfields. [Please Allow Me to Introduce Myself] Please allow me to introduce myself.
On August 26, 2000, then U.S. President Bill Clinton
talked about the contribution of Philip Emeagwali
to the development of the computer. In the days following President
Bill Clinton’s televised speech, I was inundated with enquiries
about my contribution to supercomputing.
A few days later, The Guardian newspaper of Nigeria
wrote an eight-page spread on my contributions
to the development of the computer. Despite that eight-page
newspaper profile, I needed eighty thousand [80,000] pages
to fully tell my story. Where do I begin my story?
Do I begin with the laws of physics? Or do I begin
with the technique of calculus? Or do I begin with algebra
that was the bridge between physics and calculus?
At a scientific conference, I introduced myself as a large-scale
computational algebraist that figured out how to put algebra
on the world’s fastest supercomputer and in service to physics and calculus
as well as to society. [Parallel Supercomputing 30,000 Years into
One Day] My discovery
of the parallel supercomputer that occurred on the Fourth of July 1989
that made the news headlines broke new ground because
I was the first person to figure out how to harness a new internet
that was a new global network of a million, or even a billion,
commodity-off-the-shelf processors that were tightly-coupled
and that were identical to each other. I figured out
how to harness a billion processors and use them
to solve grand challenge problems that were once-impossible to solve.
That parallel supercomputer that was then science fiction
became a reality and a new computing machinery
that was not a computer per se but that was a new internet de facto.
The important lesson is this: we cannot invent a new computer
without also inventing a new computer science.
Nor can we invent a new calculus without also extending
the frontiers of knowledge and doing so at the
interdisciplinary crossroads where mathematics and physics meet.
The genius is the ordinary person
who finds the extraordinary in the ordinary,
and her grand challenge is to solve the toughest problem
and do so as if it wasn’t tough. [What is Philip Emeagwali Famous for Inventing?] Back in 1989,
I was in the news headlines because I discovered that the massively
parallel supercomputer is a tool that makes it possible
for us to ask a grand challenge question and get the answer in only one day,
instead of in a thousand centuries. In computational physics,
a problem that takes 180 years of time-to-solution
is a grand challenge problem. That computation-intensive problem
is solvable in 180 years but is unsolvable in only one day. [Struggles to Invent Practical Parallel Supercomputing] [What is a Grand Challenge Problem of Supercomputing?] In supercomputing,
the grand challenge is a problem that traverses the frontiers
of the fields of mathematics, physics, and computer science.
The grand challenge is a problem —such as global warming—that is defined
by the public, or a question that society wants to be answered now.
In my supercomputing quest for how to solve
the grand challenge problem, I discovered
how to lift calculus out of the blackboard
and into the motherboard and across a new internet
that I visualized as a new global network of processors
that is a new computer de facto. [The Calculus of Oil Recovery] I discovered how to lift calculus
out of the mathematics textbook, or the WAEC and JAMB examinations,
and how to put calculus into the mile deep oilfields
that each occupy the space of a town in the Niger Delta region
of southeastern Nigeria. As a research supercomputer scientist,
I see myself as a hunter, or a predator, and my prey
is the unknown. My quest is to know something
nobody else knows. [Early Rejections of My Parallel Supercomputing
Discovery] Back in 1989,
the foremost vector supercomputer scientists couldn’t understand
how I parallel processed across a new internet
that is a new global network of 65,536 tightly-coupled
and identical processors. They couldn’t understand
how I was able to harness that new internet
and use that new technology to record the highest speed increase
of a factor of 65,536. And record the fastest
supercomputer speed that made the news headlines.
At first, my massively parallel processed supercomputer experimental results
of the Fourth of July 1989 were rejected as a [quote unquote] “terrible
mistake.” [Acceptance of My Parallel Supercomputing
at Chicago Conference] After two months of rejections,
and on September 1, 1989, I took my discovery
of practical parallel supercomputing to a fifteen-day hands-on
vector supercomputing workshop that took place
in the largest national laboratory in the Midwest region
of the United States. That national lab was in Chicago, Illinois.
I was invited to Chicago because the parallel supercomputer
could become the enabling technology that must be harnessed when applying
the computation-intensive mathematical physics
that must be used to foresee natural hazards
that threaten lives and livelihoods as well as foresee long-term hazards,
such as global warming that threatens the health
of planet Earth. The parallel supercomputer
is the vital technology that must be used
to supply timely, relevant, and useful information
about the global motions of the fluids that enshroud planet Earth.
The extraordinary importance of that supercomputer-hopeful
was the reason I was invited to the supercomputer workshop
that was sponsored by the United States
Department of Energy and that was held
during the fifteen days of September 1 through 15, 1989.
On my first day at that supercomputer workshop,
I was mocked and ridiculed because I claimed that
I could parallel process across a new internet
that is a new global network of 65,536 commodity processors
that were identical to each other. I was off-handedly dismissed
because I also claimed that I recorded the highest
parallel processed speed increase. I recorded the fastest
supercomputer speed in 1989
and I did so when everybody said that my speed was impossible
to attain across the slowest processors in the world.
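The headline figures can be cross-checked against numbers given elsewhere in this talk: a sustained rate of about 47,303 calculations per second on each processor, and 65,536 processors. The arithmetic (mine, using the talk’s figures) recovers the quoted 3.1 billion calculations per second:

```python
# Cross-check of the talk's speed figures (the numbers are from the
# talk; the multiplication is my illustration): 47,303 calculations
# per second on each of 65,536 processors.
per_processor_rate = 47_303      # calculations per second, per processor
num_processors = 65_536          # processors in the ensemble

total_rate = per_processor_rate * num_processors
print(total_rate)                # 3100049408, i.e. about 3.1 billion/s
print(num_processors)            # the claimed speedup factor of 65,536
```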
But as that supercomputer workshop progressed, the supercomputer instructors
learned that I had been supercomputing
since June 20, 1974 in Corvallis, Oregon. They learned that I knew more about
the parallel supercomputer than any supercomputer scientist knew.
At that workshop, I gained credibility as an expert in parallel supercomputing.
With each day that passed, I noticed that I was getting
less and less mockery of my parallel supercomputer discovery.
By the 15th and last day of that supercomputer workshop,
the majority of the attendees became convinced
that I had achieved a breakthrough in parallel supercomputing.
Yet, the few naysayers at that 15-day workshop
taunted me to submit my parallel supercomputer calculations
to the four supercomputer scientists and judges
that award the top prize in the field of supercomputing.
Five months later, one of those naysayers
called to congratulate me after he read the story about
the African Supercomputer Genius that won top U.S. prize. [Rejections and Acceptances] [Rejection of My Parallel Supercomputing at
a Washington Conference] In retrospect, the rejections
of my proof-of-principle of 1982 and of my experimental discovery
of 1989 of the parallel supercomputer
gave me street cred as the inventor
of practical parallel supercomputing. I presented my proof-of-principle
in November 1982 and I did so at a conference auditorium
that was a short walk from The White House
in Washington, D.C. The rejection, on July 4, 1989,
of my discovery
of practical parallel supercomputing proved that I invented
a new technology that was vital to the computer
and to the supercomputer. Parallel supercomputing
was first theorized in print as science fiction,
back on February 1, 1922. Practical parallel supercomputing
was born in Los Alamos, New Mexico, United States
and entered into computer textbooks as non-fiction
and did so after I discovered it on July 4, 1989. [Acceptance of My Parallel Supercomputing
at an International Conference] It was at the International
Computer Conference that took place on February 28, 1990
at the then Cathedral Hill Hotel in San Francisco (California)
that practical parallel supercomputing was officially upgraded from
science fiction to reality. At that computer conference,
I was given the top prize in the field of supercomputing
and I was recognized for my contributions
to the development of the computer. My contribution is this:
I upgraded the parallel supercomputer
from science fiction to reality. The rejections that I received
prove one important thing, namely, that parallel supercomputing was not obvious
to the leading supercomputer scientists of 1989 and earlier.
The idea of parallel supercomputing first appeared in print
back on February 1, 1922. But practical parallel supercomputing
remained a grand challenge until I discovered the technology
sixty-seven years later and discovered the technology
across a new internet that is a new global network
of commodity-off-the-shelf processors and discovered the technology
on my Eureka moment of 8:15 in the morning
of July 4, 1989 in Los Alamos, New Mexico,
United States. [A Retrospective on My Parallel Supercomputing] When I began my quest
for the fastest supercomputer technology, I didn’t have
a supercomputer per se. I used a parallel processing ensemble
that’s a new internet de facto and that’s defined and outlined
by 65,536 central processing units that were equal distances apart
from each other and that shared nothing
between each other. The supercomputer community
of 1989 knew that each of my
central processing units could only execute
forty-seven thousand three hundred and three [47,303] calculations
per second, which was the maximum sustained speed
that I attained for the specific grand challenge problem
that I was tasked to solve. In 1989, the leading minds
in the world of the supercomputer —such as Gene Amdahl
and Seymour Cray—insisted that it would forever remain impossible
to divide a grand challenge problem into smaller problems
and then to parallel process that grand challenge problem
and solve it across the slowest processors in the world
and solve that grand challenge problem at the world’s fastest
supercomputer speed. In 1989, I was in the news headlines because
I received the top prize in the field of supercomputing
and because the supercomputing community
deduced that I integrated forty-seven thousand
three hundred and three [47,303] calculations per second
per central processing unit and that I integrated those calculations
across my new internet that is a new global network
of sixty-five thousand five hundred and thirty-six [65,536]
central processing units. My supercomputer experiment
of July 4, 1989 that made the news headlines
enabled me to reach the total supercomputing speed
of 3.1 billion calculations per second, which was the fastest speed on record
for solving grand challenge problems. That supercomputer calculation
made the news headlines as a world record
and was recorded in the June 20, 1990 issue of The Wall Street Journal. [My Struggle to Get Credit for Inventing Practical
Parallel Supercomputing] As a black supercomputer scientist
that was born in Nigeria (Africa), I did not have the structural advantages that
put Seymour Cray or Steve Jobs in their leadership positions
and that made it possible for them to claim the credit
for other inventors’ inventions. In the past, it was a tradition
in American science to re-assign the credit
for the invention of a black slave to his white owner.
So it should not come as a surprise for me to say that
vestiges from the past carried over to the present
and that I was blackmailed and threatened
to re-assign the credit for my inventions
to white non-inventors who could not give a public lecture
on my inventions and who’ve known me for seven years but
yet could not even spell or pronounce my last name.
Recently, Seymour Cray, the pioneer of vector supercomputers of the
1970s, was re-assigned the credit
for inventing the massively parallel supercomputer.
Yet, it is well documented that Seymour Cray
ridiculed, mocked, and dismissed the parallel supercomputer
as a huge waste of everybody’s time. In the 1980s, Seymour Cray
taunted aspiring parallel supercomputer scientists and did so by asking them:
[quote] “If you were plowing a field,
which would you rather use? Two strong oxen or 1,024 chickens?”
[unquote] Looking back to the 1980s,
some scientists that fought to get the credit for my invention
of practical parallel supercomputing were very self-regarding scientists
and arrogant men with a strong sense of self-entitlement
to getting undeserved credit. Some scientists were asked
to apologize to Philip Emeagwali for plagiarizing, stealing, and publishing
his inventions as their inventions. [The Importance of Supercomputers] [Inventing the Parallel Supercomputer] I’m Philip Emeagwali. The grand challenge problems
of supercomputing are the toughest problems
arising in mathematical physics. The modern supercomputer
is an instrument that must be used to solve the toughest problems,
such as general circulation modeling to foresee otherwise unforeseeable
climate changes. In 1989,
I was in the news headlines because I discovered
how to solve grand challenge problems and how to do so
by dividing each problem into a million smaller problems
and then parallel supercomputing them at lightning-fast speeds
and across as many processors. I was in the news headlines because
I discovered how to solve once-impossible problems and
how to solve them across a new internet
that is a new ensemble of one million processors
that represents as many brains of a new virtual supercomputer. [The Supercomputer: Then and Now] What sets the new supercomputer apart
from the old computer are these:
The new supercomputer occupies the space of a soccer field.
The new supercomputer has the combined speed
of one million processors. In essence, my supercomputer
is built from the same processor that your computer was built from.
The crucial difference between a supercomputer
and a computer is that the one million processors
inside that supercomputer are integrated
to become one seamless, cohesive, virtual computer
that is super, beyond super. This new supercomputer technology—called
parallel processing—made the news headlines
when I discovered it on the Fourth of July 1989
and discovered it across an ensemble of 64 binary thousand processors
that were identical to each other that were tightly-coupled to each other
and that shared nothing between each other.
A massively parallel supercomputer that must be used to solve
the toughest problem costs up to 1.25 billion dollars
and occupies the space of a soccer field, and must be chilled
to prevent its millions of processors, or brains, from overheating. [The Supercomputer: From Fiction to Fact] In the 1970s and ‘80s
and since February 1, 1922, parallel supercomputing
was on the drawing board. I worked alone
on practical parallel supercomputing and I did so at the time
the technology was mocked, ridiculed, and dismissed
as a huge waste of everybody’s time. For sixty-seven years,
practical parallel supercomputing was a formidable foe
of mathematicians and physicists. [Who is Philip Emeagwali?] A teacher asked her students:
“What is Philip Emeagwali noted for?”
My answer is this: I invented
practical parallel supercomputing, the vital technology
inside today’s supercomputers. Until 1989,
using the parallel supercomputer to solve a Grand Challenge Problem
remained in the realm of science fiction. [The Importance of Parallel Supercomputing] Consider this:
At a parallel processed supercomputer speed
of one thousand million billion calculations per second,
it would take a person performing one calculation per second
31.75 billion years, or more than twice the age of the universe,
to solve a grand challenge problem that would take that supercomputer
only one second to solve. Parallel processing revolutionized
the way we think about the fastest supercomputers
that must solve millions of problems at once, instead of solving
a grand challenge problem and solving it
in the step-by-step, serial processing fashion
that was the prevailing paradigm since the programmable computer
was invented back in 1946. The parallel supercomputer
is a technological revelation that is standing the test of time. [Control of Sensitive Supercomputer Technologies] The parallel supercomputer
is a critical technology that makes the impossible-to-compute
possible-to-compute. For that reason, the stakes
in the new supercomputer were so high that the United States added
a Russian supercomputer maker to its list of nuclear threats.
It’s a crime for a U.S. company to sell a supercomputer to an
[quote unquote] “unfriendly nation.” In China, it’s also a crime
to export Chinese supercomputer technology
to the United States. In Japan, it’s also a crime
for me, Philip Emeagwali—who is not Japanese—to browse through
the operating instructions of supercomputers
that were made-in-Japan, even though I invented
practical parallel processing as the vital technology
that underpins every supercomputer that is made in Japan.
It’s a crime to export U.S. supercomputers because
the parallel supercomputer is, de facto,
a weapon of mass destruction. The fastest supercomputer
provides the horsepower for secretly simulating
nuclear explosions and doing so in parallel
and across an ensemble of millions of commodity-off-the-shelf processors
that were tightly-coupled to each other and that shared nothing
between each other. That was the reason
the United States government classified the parallel processed solution
of initial-boundary value problems as grand challenge problems.
After all, solving this once-impossible problem transcends abstract
mathematics. The solution of
the grand challenge problem of mathematical physics
is a positive contribution to human progress. [Inventing a New Internet that is a New Supercomputer] [Philip Emeagwali Internet] [From Science Fiction to Reality] What is Philip Emeagwali known for?
My contribution is this: I invented a new internet
that is a new supercomputer. On February 1, 1922, a science fiction story
was published. That science fiction story
described how 64,000 human computers
around the world could work together
to forecast the weather. Sixty-seven years
after that science fiction story and at 8:15 in the morning
of the Fourth of July 1989, I experimentally discovered
how to harness a new internet that is a new global network
of 64 binary thousand processors and how to harness them
to forecast the weather around the world.
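The idea of forecasting weather across 65,536 processors rests on domain decomposition: cut the global grid into equal tiles, one tile per processor. A toy sketch of that partitioning (my illustration; the 1024 × 1024 grid size is invented for the example):

```python
# Toy domain decomposition (my illustration): split a global weather
# grid into equal rectangular tiles, one tile per processor, so that
# 256 * 256 = 65,536 processors each own one piece of the planet.

PROC_X, PROC_Y = 256, 256          # processor layout: 65,536 in total
GRID_X, GRID_Y = 1024, 1024        # global grid points (invented sizes)

tile_x = GRID_X // PROC_X          # grid columns per processor (4)
tile_y = GRID_Y // PROC_Y          # grid rows per processor (4)

def owner(i: int, j: int) -> int:
    """Which of the 65,536 processors owns global grid point (i, j)?"""
    return (i // tile_x) * PROC_Y + (j // tile_y)

print(PROC_X * PROC_Y)             # 65536
print(owner(0, 0))                 # 0: first tile, first processor
print(owner(1023, 1023))           # 65535: last tile, last processor
```

Each processor then advances the forecast on its own tile and exchanges only boundary values with its neighbors.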
My invention—called practical parallel processing—
made the news headlines around the world.
The typical news headline was: “African Supercomputer Genius
Wins Top US Prize.” Back in 1989, I was in the news because
I discovered how to bring that science fiction story
that was published in 1922 to reality. [Philip Emeagwali Supercomputer] I’m Philip Emeagwali. The supercomputer technology
that I invented is called parallel processing.
A new parallel supercomputer is one that has
a never-before-seen processor-to-processor configuration.
Parallel processing enables millions upon millions
of processors operating inside
the modern supercomputer to communicate and compute
and to do both as one seamless, cohesive unit
that is a virtual supercomputer. Parallel processing
enables the supercomputer that is powered by
one million processors to be one million times faster than
the computer that is powered by
only one processor. The parallel supercomputer moves humanity
forward and into the future.
Hopefully, as we move forward by parallel processing across
a global network of computers around the Earth
our children’s children could build
their planetary-sized supercomputer that could someday become
one and the same thing as their Internet.
Today, parallel processing is vital to every supercomputer manufactured
and may become vital to every computer of the future.
I was in major newspapers because I experimentally discovered
how and why parallel processing makes
the modern computer faster and makes the modern supercomputer fastest. [Inventing Philip Emeagwali Internet] Please allow me to take
a retrospective look at how I discovered
how to program a new internet, named the Philip Emeagwali Internet.
That new internet is a new global network of
sixty-five thousand five hundred and thirty-six [65,536]
processors, or 65,536
identical computers. I visualized those processors
as equal distances apart and on the surface of a globe
within a sixteen-dimensional hyperspace. I discovered
how to program that new internet as one seamless, cohesive
whole supercomputer that was not a computer per se
but that was a virtual supercomputer de facto.
I discovered how to program that new internet
as a new supercomputer and how to email and control
my 65,536 processors and how to do both
without seeing or touching any of those processors.
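“Emailing” 65,536 unseen processors amounts to addressing each one by a unique 16-bit number and dropping work into its mailbox. A toy sketch of that addressing idea (my illustration, not the original code):

```python
# Toy sketch (my illustration): each of the 65,536 processors is
# addressed by a unique 16-bit number, and "emailing" a processor
# means delivering a message to the mailbox at that address --
# no need to see or touch the machine itself.

NUM_PROCESSORS = 2 ** 16                       # 65,536 addresses

mailboxes = {addr: [] for addr in range(NUM_PROCESSORS)}

def email(addr: int, message: str) -> None:
    """Deliver a message to the processor with the given 16-bit address."""
    mailboxes[addr].append(message)

# Send one computational-physics code to every processor:
for addr in range(NUM_PROCESSORS):
    email(addr, "solve your piece of the grand challenge problem")

print(len(mailboxes))                          # 65536
print(format(NUM_PROCESSORS - 1, '016b'))      # 1111111111111111
```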
I discovered how to program and harness the processors
that outline and define that new internet, and control them blindfolded.
I discovered how to divide a grand challenge problem
into one million smaller problems and how to solve
those one million challenging problems at once. [Wholesale Plagiarism of the Philip Emeagwali
Internet] Some scientists tried to position themselves
to take the credit for my inventions. A few succeeded.
I have documentation to prove the complete plagiarism
of my invention. A parallel processed supercomputer
that has thousands of processors encircling a globe
that I invented alone was plagiarized
by a team of seasoned researchers that received United States
federal funding to do so. I distinguish between intentional
and unintentional plagiarism. Those researchers
stole my invention in its entirety and did not contribute to my invention.
Those researchers merely removed my name
as the inventor of the Philip Emeagwali Supercomputer
and put their names as its inventors.
As a supercomputer inventor that came of age in the 1980s,
I felt like the songwriter that was not credited for the songs
that he wrote. And I felt like the painter
that was not allowed to sign his name on his original paintings.
For me, the toughest part about being a black inventor
is getting the full credit for the new supercomputer
that I invented alone. Back in 1989, I was perceived
as a difficult person to work with and perceived as such
by research scientists who never worked with me.
The reason was that I made it impossible for someone else
to take the credit away from me and do so
for the practical parallel supercomputing technology
that I invented alone. I wasn’t a difficult person
to work with. Those researchers that tried
to steal the credit from me were difficult persons to work with. [Inventing a New Supercomputer that is a New
Internet] [The Grand Challenge Problem] That problem
of how to increase the speed of the modern supercomputer
and increase that speed by a factor of one million
was the grand challenge problem of mathematical physics
that was posed back on February 1, 1922
and that I solved on July 4, 1989.
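The scale of a factor-of-a-million speedup is easy to check with back-of-envelope arithmetic (mine, not the talk’s): what a million-fold-faster machine finishes in one day would occupy a single computer for roughly 2,700 years.

```python
# Back-of-envelope check (my arithmetic) of the factor-of-a-million
# claim: one parallel day equals one million serial days.
speedup = 1_000_000
serial_days = 1 * speedup            # one parallel day -> serial days
serial_years = serial_days / 365.25  # convert days to years

print(serial_days)                   # 1000000
print(round(serial_years))           # 2738 -- millennia upon millennia
```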
I discovered how to solve, in only one day,
and across that new internet that is a parallel supercomputer
and how to solve a grand challenge problem
that would have taken millennia upon millennia
of time-to-solution on one computer. [From Science Fiction to Reality] For sixty-seven years
onward from February 1, 1922, parallel processing was abandoned
by supercomputer experts. Supercomputer textbooks
dismissed parallel processing as science fiction.
Parallel supercomputing was ridiculed as a huge waste of everybody’s time.
Because my massively parallel supercomputer
was an unconventional technology and a new internet,
I used an unorthodox technique to send and receive
my sixty-five thousand five hundred and thirty-six [65,536] computational
physics codes that I had to email
to my as many processors of my new internet.
Deep inside the parallel supercomputer, the email
is the recurring decimal across each pair
of bi-directional email wires that connects nearest-neighboring processors
that shared nothing. I had sixty-five thousand
five hundred and thirty-six [65,536] unique email addresses
for as many processors. Each processor
operated its own operating system. Each email address was sixteen bits long,
or a unique string of sixteen zeroes and ones.
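The sixteen-bit addressing he describes can be sketched in a few lines of Python (my illustration, not the original code): each processor gets a unique sixteen-bit address, and flipping any one of its sixteen bits yields one of its sixteen nearest neighbors, one per bi-directional email wire.

```python
# Sketch of the 16-bit addressing described above (my illustration,
# not the original code). Each of the 65,536 = 2**16 processors gets
# a unique 16-bit address -- no "@" sign, no dot-com suffix. Flipping
# any single bit of an address yields one of the 16 nearest neighbors.

def address(p: int) -> str:
    """Render a processor id as a 16-bit binary address."""
    return format(p, "016b")

def neighbors(p: int) -> list[int]:
    """The 16 processors whose addresses differ from p's in exactly one bit."""
    return [p ^ (1 << bit) for bit in range(16)]

print(address(0))         # 0000000000000000
print(address(65_535))    # 1111111111111111
print(len(neighbors(0)))  # 16
```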
My email addresses within that new internet
were unorthodox because they had no “at” (@) signs
or dot com suffixes. Their “at” signs and suffixes
were unnecessary because I knew where my
sixty-five thousand five hundred and thirty-six [65,536]
processors that outlined and defined
my new internet were at. [Proof-of-Principle of Parallel Supercomputing] My discovery
of practical parallel supercomputing began with some
back-of-the-envelope calculations and a few proof-of-principle lectures
in which I presented my realizations that a million processors
could in principle be harnessed to solve the toughest problems
arising in mathematics, physics, and computer science.
My proof-of-concept was small and was not complete.
I gave my proof-of-principle lectures back in the early 1980s.
At that time, parallel processing across sixteen times
two-raised-to-power sixteen, or one binary million,
bi-directional email wires was the frontier
and the unknown territory and the science fiction
of the world of the supercomputer. Sixty-seven years earlier,
a meteorologist presented as science fiction the story of
64 thousand human computers working together to solve
a grand challenge problem, such as forecasting the weather
around the entire planet Earth. That science fiction story
was published back on February 1, 1922. But unlike the science fiction writer,
I, as a non-fiction research massively parallel
supercomputer scientist, had a very limited number of words
that I could use to describe how I emailed
my initial-boundary value problems of mathematical physics
and how I sent them across a never-before-seen internet
that is a new global network of 64 binary thousand processors.
As a supercomputer scientist, I’m different
from the science fiction writer because the starting point
of the science fiction writer is a blank page
plus the unlimited fictional stories that she can conjecture
and that she can use to populate her blank pages. [The First Supercomputer Scientist] But as the first parallel supercomputer scientist
that started his quest for the solution
of grand challenge problems and started on a conventional supercomputer
and started on June 20, 1974 in Corvallis, Oregon (United States)
and continued almost daily for fifteen years
and continued to the capital of supercomputing, namely, Los Alamos, New
Mexico (United States), and as that first modern
supercomputer scientist, I couldn’t make up fictional stories
that could not be reconfirmed by each and every
subsequent parallel supercomputer scientist. [Inventing Philip Emeagwali Supercomputer] As the non-fiction
supercomputer scientist that I was, I could not write a word
about practical parallel supercomputing and do so until I, first and foremost,
divided my grand challenge problem into smaller, less challenging problems
and then synchronously emailed them across my one binary million
email wires that interconnected my processors
and then simultaneously solved them with the one-to-one
problem-to-processor correspondence that I maintained
between my 64 binary thousand smaller mathematical physics problems
and my as many commodity-off-the-shelf processors
that shared nothing between each other. My first discovery
of practical parallel supercomputing that occurred on the Fourth of July 1989
was rejected as a [quote unquote] “terrible mistake.”
Back in 1989 and earlier, practical parallel supercomputing
was mocked, ridiculed, and rejected as a beautiful theory that lacked
an experimental confirmation. My supercomputing quest
was to experimentally confirm massively parallel supercomputing
and re-confirm it to a speed limit
that was never-before-attained, namely, across a never-before-seen internet
that was my new global network of 65,536 tightly-coupled,
commodity-off-the-shelf processors. For that invention
of a new supercomputer, I used the toughest problems
in mathematical physics as my computational testbed. [Solving an Unsolved Problem in Mathematics] [The Grand Challenge Problem of Mathematics] The poster girl
of the twenty grand challenge problems is the petroleum reservoir simulation
of a production oilfield that may be two miles
below the surface of the Earth and the size of a town.
The reason one in ten supercomputers was purchased
by the petroleum industry was that the parallel processed
petroleum reservoir simulator helps the oil company
to discover and recover as much crude oil and natural gas
as is possible and to recover them
as long as possible as well as to compute them
at a supercomputer speed that was previously believed
to exist only in the realm of science fiction.
The speed increase of a factor of 65,536 that I recorded on July 4, 1989
was dismissed as science fiction and I was disinvited
from giving my lecture on how I discovered
practical parallel supercomputing. [My First Unveiling of Practical Parallel
Supercomputing] My discovery
of practical parallel supercomputing was rejected as [quote unquote]
“a serious mistake.” After two months of continuous rejections
of my discovery of massively parallel supercomputing,
I went in search of re-confirmation of my discovery.
I was compelled to provide expert eye-witnesses
to my discovery of practical parallel supercomputing.
My first stop was a 15-day-long supercomputer workshop
that took place from September 1 to 15, 1989
in Chicago, United States. During that supercomputer workshop,
I spent the first fourteen days building the trust and confidence
of the supercomputer workshop instructors and participants
who at that time did not know who I was.
From my contributions to the workshop discussions
on how to record the fastest speeds within the parallel supercomputer,
the instructors realized that I had been supercomputing
for the past fifteen years and that they knew less than I did.
On the fifteenth and last day of that supercomputer workshop,
I suddenly cleared my throat and made the announcement
that brought me to Chicago, namely, that I had discovered
practical parallel processing. You could hear a pin drop
in the room as everybody gazed at me
in stunned silence! For the first time since June 20, 1974,
in Corvallis, Oregon, United States, a group of supercomputer scientists
attentively listened to me as I explained to them
how I discovered how to massively parallel process
across 65,536 processors that each operated
its own operating system. I discovered
how to reduce the calculation time of the twenty grand challenge problems of
supercomputing. I discovered
how to reduce that time-to-solution and do so with a speed up of 65,536.
Before September 15, 1989, a speed up that reduced 65,536 days,
or 180 years, of time-to-solution to just one day
existed only in the realm of science fiction.
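The arithmetic behind those figures can be checked directly (a sketch I added; the numbers themselves are from the lecture):

```python
# A quick check of the figures in the lecture (my sketch): a speed-up
# factor of 65,536 reduces 65,536 days of serial time-to-solution
# to a single day, and 65,536 days is roughly 180 years.

serial_days = 65_536
speedup = 65_536

parallel_days = serial_days / speedup
serial_years = serial_days / 365.25

print(parallel_days)        # 1.0
print(round(serial_years))  # 179 -- roughly the 180 years quoted
```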
For me, Philip Emeagwali, that Eureka Moment! in Chicago
was surreal. After my announcement
at that supercomputer workshop of my discovery
of practical parallel supercomputing it was so quiet
that you could hear a pin drop in the room.
The supercomputer scientists attending that Chicago workshop
challenged me to submit my discovery
to the highest authority in supercomputing.
That highest authority was The Computer Society
of the IEEE. The IEEE is the acronym
for the Institute of Electrical and Electronics Engineers.
In late December 1989, The Computer Society
re-confirmed my discovery of practical parallel supercomputing.
The Computer Society invited me to come to the forthcoming
International Computer Conference that would take place on February 28, 1990
in San Francisco, California. Two months prior to that conference,
the Computer Society of the IEEE sent out a press release
that recognized my contributions to [quote unquote]
“practical parallel processing.” In their press release,
the Computer Society announced that I had won
the highest award in the field of supercomputing. [Philip Emeagwali is Well Known, But Not Known
Well] I’m well known
but not known well. I’m well known
for inventing a new internet that is a new supercomputer de facto
and that is a new global network of sixty-five thousand
five hundred and thirty-six [65,536] processors
that were tightly-coupled to each other and that shared nothing
between each other. I’m well known for figuring out
how to harness the processors within that new internet
and how to use that new knowledge to solve initial-boundary value problems arising
in mathematical physics that were otherwise impossible-to-solve.
But I am not known well for foreseeing my discovery
as, de facto, a new internet. I’m well known
for experimentally discovering, or recording speeds
in floating-point arithmetical computations that were previously unrecorded.
But I am not known well for using email communications
across that new internet to record communication speeds
that were previously unrecorded. But I am not known well
for discovering, or seeing for the first time,
those supercomputer speeds and recording them across
my new internet. But I am not known well
for changing the way we look at the modern computer
and the modern supercomputer. After the Fourth of July 1989,
I became known for the experimental discovery
of parallel supercomputing. That discovery
made the news headlines because it was beyond theory
and beyond the computer and because it was specific,
quantifiable, and measurable. Every new technology
has a starting point. Parallel processing
is the starting point of the modern supercomputer. [The Importance of Supercomputing] [I Was Dismissed From Supercomputing Research
Teams] In the 1970s and ‘80s,
the supercomputer-hopeful technology, called parallel processing,
was mocked, ridiculed, and dismissed as a huge waste of everybody’s time.
Today, parallel processing is universally used
to reduce the time-to-solution of the toughest problems
arising in the field of supercomputing. Parallel processing is used
to increase the speed of the fastest computers
and all supercomputers. My discovery
of practical parallel processing was how I entered as a benchmark
into the history of the development of the computer and the internet. [Inventor Reports on Philip Emeagwali] In U.S. public libraries,
I see 12-year-olds writing school reports on the contributions
of Philip Emeagwali to the development of the computer.
I entered into school curricula after my discovery
of practical parallel supercomputing. That discovery occurred
on the Fourth of July 1989 in Los Alamos, New Mexico,
United States. My discovery
of practical parallel supercomputing made the news headlines because
it was new knowledge that changed the way
we look at the supercomputer. My discovery
of practical parallel supercomputing was recorded
in the June 20, 1990 issue of the Wall Street Journal.
At its core, parallel supercomputing
is about one billion processors computing together
to solve one big problem. Parallel supercomputing
is the vanguard of computer science. The parallel supercomputer
is the engine that is used to discover new knowledge
and solve grand challenge problems arising in STEM fields. [Contributions to the Supercomputer] My contribution
to the development of the computer is this:
I discovered that we can parallel process
and solve grand challenge problems arising in mathematics and physics
and solve them across a new internet that is a new global network
of commodity-off-the-shelf processors that shared nothing between them.
I paradigm shifted from computing only one thing at a time,
or in sequence, to supercomputing one million things
at once, or in parallel. I was the first person
to solve a grand challenge problem and solve it by dividing it
into smaller problems and communicating them via emails
to sixty-five thousand five hundred and thirty-six [65,536]
processors. I was the first person
to solve as many as sixty-five thousand five hundred and thirty-six [65,536]
parallel processed initial-boundary value problems
of mathematical physics and solved them at once.
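The one-to-one problem-to-processor correspondence he describes can be sketched as follows (my minimal illustration, assuming a square 256-by-256 grid of subproblems; the lecture does not specify the decomposition):

```python
# Minimal sketch (my illustration; the lecture gives no code) of a
# one-to-one problem-to-processor correspondence: a 256 x 256 grid of
# subproblems, one per processor, since 256 * 256 = 65,536 = 2**16.

GRID = 256

def processor_for(i: int, j: int) -> int:
    """Which processor owns the subproblem at grid position (i, j)?"""
    return i * GRID + j

def coords_for(p: int) -> tuple[int, int]:
    """Inverse map: which subproblem does processor p solve?"""
    return divmod(p, GRID)

print(processor_for(255, 255))  # 65535 -- the last processor
print(coords_for(0))            # (0, 0) -- the first subproblem
```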
My discovery, called practical parallel processing,
is the vital technology that must be used to solve
the toughest problems arising in science and engineering
and used to solve them in minimum time. [Supercomputing From Fiction to Fact] When I began sequential supercomputing,
on June 20, 1974 at age 19, parallel supercomputing
then only existed in the realm of science fiction.
For the sixty-seven years after February 1, 1922,
parallel supercomputing existed only as an urban legend
of the mathematical physics community. My parallel supercomputing experiment
made the news headlines, back in 1989.
But my discovery of the fastest computer speed
was not newsworthy for pushing the boundaries
of how fast supercomputers could compute.
My discovery was newsworthy because I discovered the fastest speeds across
a new internet that I described as a new global network of
65,536 processors that tightly-encircled a globe.
That discovery enabled the supercomputer
to be true to its vital technology that is named
“parallel processing.” Parallel processing revolutionized
the field of supercomputing by giving it new horizons
that range from the mathematician’s blackboard
to the engineer’s drawing board. The serial processed weather forecast
is unpredictable. We parallel process
the grand challenge problem of weather forecasting
to make unpredictable weather predictable. [Importance of Parallel Computing in Your
Everyday Life] The speed of a computer
can be increased by packing more transistors on chips
and/or putting more central processing units
and graphics processing units and using them
as identical cores and nodes of a global network of processing units
that are equal distances apart and that are on the surface of a globe.
Why is the supercomputer of today much faster than
the supercomputer of 1988, and earlier? The modern supercomputer is faster because
its underlying parallel processing units
did the supercomputing. The processor
is the brain of the computer. In the modern computer,
the serial kernel of an application code is computed within
a few central processing units that each computed
only one thing at a time. In the modern supercomputer,
the parallel kernel of an application code is parallel computed within
the graphics processing unit that computed many things at once,
or in parallel. The graphics processing unit
is a parallel processing tool that is used by
the central processing unit to perform faster computations
just like the central processing unit is a sequential processing tool
that is used by the sequential processing human computer
to perform faster computations. The graphics processing unit
is a massively parallel machine, and its presence inside your computer
redefined your computer as parallel processing.
The graphics processing unit computes in parallel,
or computes many things at once. The graphics processing unit
computes the computation-intensive kernel
of your application and did so
when that kernel could be parallelized. The few cores within
the central processing unit serially computed the portion
of the computation-intensive physics code
that could not be parallelized. If the central processing unit
is the brain of your computer then the graphics processing unit
is the soul of your computer. [Inventing a New Computer Science] [New Paradigm of Supercomputing] The word “computer”
was coined two thousand years ago when it was first used
by the Roman author Pliny the Elder. For two thousand years,
the word “computer” referred to a human computer
that computes manually, rather than to a programmable
electronic machine that computes automatically.
When the mid-20th century British logician, Alan Turing,
and his contemporaries wrote about the [quote unquote] “computer,”
they meant a human computer, not an electronic machine that computes.
The meaning of the word “computer” changed in 1946,
when the terminology [quote unquote] “programmable digital computer”
was shortened to “computer”. For my 1989 discovery
of practical parallel processing, the technology that underpins
every modern supercomputer, I had to redefine
the “programmable digital computer.” I redefined the technology
because I discovered how to divide a grand challenge problem
into smaller problems and how to solve them across
my new internet that is a new global network of
65,536 commodity processors. Each processor
operated its own operating system. As predicted in the June 20, 1990 issue
of the Wall Street Journal, my experimental discovery
of practical parallel processing opened the door
to the modern supercomputer technology that is harnessed and used
to solve real world problems and solve them across
central processing units that accelerate their speeds
of computation and do so with identical
graphics processing units. As a supercomputer scientist
that came of age in the 1970s and ‘80s,
I thought of the supercomputer differently. Conventional supercomputer scientists
programmed vector supercomputers and believed that
the fastest computations could only be recorded
on one central processing unit that’s a vector unit. In the old paradigm of supercomputing,
they thought of the supercomputer in the singular sense,
or solving only one problem at a time. In my new paradigm of supercomputing,
I thought of the supercomputer in the plural sense of 65,536
identical central processing units and as many identical graphics processing
units. Back in 1989,
I was in the news headlines because I experimentally discovered
how to use those units to solve 65,536 problems at once.
My discovery opened the door to the present technology of using
graphics processing units, where possible,
and using them to accelerate the speed of the floating-point arithmetical
operations that must be executed
by the modern parallel supercomputer. My experimental discovery
of how to parallel process and do so to solve the toughest problems
and do so across a new global network of
65,536 processors was achieved across a new internet.
The supercomputer of today will become the computer of tomorrow.
The supercomputer is at once able to define our past,
recreate our present, and reinvent our future. [How I Discovered Practical Parallel Supercomputing] The supercomputer technology
called massively parallel processing that was mocked
as a very useless technology is now the front and the center
of high-performance computing and is rapidly moving into laptops
and desktops. Until the Fourth of July 1989,
parallel processing was not verified by any experiment
that was conducted across an ensemble of thousands of processors
and that used a real-world grand challenge problem
as its computational testbed. My contribution to the development
of the computer is this: On the Fourth of July 1989
in Los Alamos, New Mexico, United States, I provided the lockdown
experimental evidence that the technology of
massively parallel supercomputing can be harnessed
and used to solve the toughest problems arising from mathematics to medicine
and from science to engineering. I, alone, conducted
that time-consuming experimentation that led to my discovery of the best way
to get millions of processors to solve the toughest problems
and to move humongous data into and out of storage
and to solve them in harmony and as one seamless, cohesive supercomputer.
The electricity budget of the email messaging
that is a precondition to moving data into and out of
millions upon millions of processors raises the electricity bill
to up to forty (40) million dollars per year,
and eventually costs more than the next world’s fastest computer
that will cost the United States six hundred (600) million dollars
in the year 2023. The world’s fastest computer
consumes as much electricity as two million Nigerians. Thank you. I’m Philip Emeagwali.
[Wild applause and cheering for 17 seconds]