
The idea that humanity may be approaching a "singularity" as a result of increasingly rapid technological advancement has moved into the realm of serious debate. In physics, a singularity is a point in space and time, such as the core of a black hole or the instant of the Big Bang. By analogy, a singularity in human history would occur if technological progress brought about such dramatic change that human affairs as we understand them today came to an end. The economy, the government, the law, and the state would not survive in their present form. Basic human values, such as the sanctity of life, the pursuit of happiness, and the freedom to choose, would sooner or later be displaced. Our understanding of what it means to be human, to be an individual, to be alive, to be conscious, and to be part of the social order, would all be called into question.

The hypothesis I will explore relates to artificial intelligence and the effect of technological advancement on a possible technological singularity. Human knowledge has, of course, been accumulating for a long time, yet our ability to apply that knowledge, the human brain, has remained essentially unchanged. If intellect becomes not just the creator but also a creation of technology, a feedback cycle with unpredictable and potentially explosive consequences is set in motion. Before long, according to the singularity hypothesis, the ordinary human is removed from the loop, eclipsed by artificially intelligent machines or cognitively enhanced biological intelligence, and unable to keep pace.


Does the technological singularity hypothesis deserve to be taken seriously? According to what Ray Kurzweil calls the "law of accelerating returns", an area of technology is subject to accelerating returns if the rate at which the technology improves is proportional to how good the technology already is. The better the technology, the faster it gets better, yielding exponential improvement over time.
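The law of accelerating returns can be sketched as a simple growth model in which each year's improvement is proportional to the current level of capability. The starting value, rate constant, and yearly step below are illustrative assumptions, not measured figures.

```python
# Minimal sketch of the "law of accelerating returns": the rate of
# improvement is proportional to current capability (dP/dt = k * P),
# which yields exponential growth. All numbers are illustrative.

def accelerating_returns(p0: float, k: float, years: int) -> list[float]:
    """Capability over time when each year's gain is k times the
    current level (a discrete step of dP/dt = k * P)."""
    capability = [p0]
    for _ in range(years):
        capability.append(capability[-1] * (1 + k))
    return capability

trajectory = accelerating_returns(p0=1.0, k=0.5, years=10)
# The absolute gain in each year exceeds the gain of the year before:
gains = [b - a for a, b in zip(trajectory, trajectory[1:])]
assert all(later > earlier for earlier, later in zip(gains, gains[1:]))
```

The defining feature, visible in the assertion, is that improvement compounds: the better the technology, the larger the next step.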

An example of this pattern is Moore's law, according to which the number of transistors that can be fabricated on a single chip doubles roughly every eighteen months to two years. The semiconductor industry managed to follow this law for several decades, and CPU clock speed followed a similar trajectory for much of that period. Nor is information technology the only area showing such progress. In medicine, for example, DNA sequencing has fallen in cost while increasing in speed, and brain-scanning technology has seen steadily improving resolution.
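The arithmetic behind Moore's law is easy to make concrete: a fixed doubling period compounds into enormous growth over a decade. The starting transistor count and the 18-month period below are assumptions chosen for illustration, not industry data.

```python
# Illustrative Moore's-law arithmetic: transistor count doubles every
# fixed period. The starting count and doubling period are assumed
# values for the example, not measured industry figures.

def transistors_after(start: int, months: int, doubling_months: int = 18) -> int:
    """Transistor count after `months`, doubling every `doubling_months`
    (only completed doubling periods are counted)."""
    return start * 2 ** (months // doubling_months)

# Ten years (120 months) at an 18-month doubling period gives
# 120 // 18 = 6 doublings, i.e. a 64-fold increase:
assert transistors_after(1_000_000, 120) == 64_000_000
```

Even at the slower two-year doubling period the same decade yields a 32-fold increase, which is why the trend compounds so dramatically over several decades.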

Plotted on a timeline, these trends can be seen as part of a series of technological innovations arriving at ever-shorter intervals: agriculture, printing, electricity, and the computer. On an even longer timeline: eukaryotes, vertebrates, primates, and Homo sapiens. Such observations have led some to view the human race as riding a curve of growing complexity that stretches deep into the past. We need only extend the technological portion of that curve a little way into the future to reach a decisive turning point: the moment at which human technology renders the unenhanced human technologically obsolete.

Of course, every exponential technological trend must level off eventually, if only because of the laws of physics, and there are any number of economic, political, or scientific reasons why a trend may stop well before its theoretical limit. But suppose the technological trends most relevant to AI maintain their momentum, eventually yielding the ability to engineer the stuff of mind, to build the machinery of intelligence. At that point the machinery of intelligence, artificial or human, would itself become subject to the law of accelerating returns, and from there it is a short step to a technological singularity.

Some, including Ray Kurzweil, predict that the breakthrough will occur in the middle of the 21st century. But there are reasons to think through the idea of a singularity beyond any particular prediction. First, the concept is intellectually interesting in its own right, regardless of when, where, or how it might occur. Second, the possibility, however remote it may seem, merits discussion today on strictly rational grounds. Even if the futurists' arguments have flaws, we need only assign a small probability to the anticipated event: if a technological singularity did occur, the consequences for humanity would be enormous. What might those consequences be? What sort of world might come into being? Should we fear the singularity, or welcome it? And what, if anything, can be done today or in the near future to give us a chance of the best outcome? These are large questions, but the very concept of the singularity may also shed new light on ancient philosophical questions that are perhaps larger still. How should we live? What are we willing to give up? The possibility of a technological singularity poses both an existential risk and an existential opportunity.

The idea of artificially intelligent agents whose priorities differ from those of their creators has been discussed for years. Yet the technological singularity can also be seen as an existential opportunity. The ability to engineer the stuff of mind opens the door to transcending our biological heritage and overcoming its limitations. Problems such as malnutrition, or the diseases that have claimed so many lives over the past century, could fall to a chain reaction of ever more detailed research into the human body and its cognitive and biological nature.

 

Among the advantages a technological singularity might bring is the ability to investigate the human body and its mysteries with advanced equipment paired with chemical and biological discoveries. Questions that have remained open for decades could potentially be answered, perhaps in full. So far our knowledge has arguably been applied more ambitiously to outer space than to our own planet and the eight billion bodies on its surface, which brings us to whole brain emulation (WBE). In brief, the idea is to transfer a copy of a brain onto a non-biological substrate by computational means. Understanding the details requires some basic neuroscience. The brain, like every other organ in an animal's body, comprises many cells. Most of these are neurons: impressive electrical devices, each capable of fast signal processing. A neuron consists of a cell body (the soma), an axon, and dendrites. The dendrites can be thought of as the neuron's input and the axon as its output, while the soma handles the signal processing.

Neurons are interconnected, forming a complex network throughout the body. Both axons and dendrites are tree-like, their many branches weaving among the axons and dendrites of surrounding neurons. A synapse can form where the axon of one neuron comes close to a dendrite of another. Through a complex exchange of chemicals, a synapse enables signals to pass from one neuron to the next, allowing communication across that gap. Neurons are not confined to the central nervous system (the brain and spinal cord); they also make up the peripheral nervous system, which consists of every nerve outside the brain and spinal cord. The peripheral system is divided into two parts, the sensory division and the motor division, which are largely responsible for our senses and for the working of our glands and muscles. Activity in the brain results from this chemical and electrical activity. Even this outline barely scratches the surface of what we know about the brain, and what we know barely scratches the surface of what there is to know.
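The input-processing-output picture above is exactly what the classic artificial neuron abstracts: dendrites carry weighted inputs, the soma sums them against a threshold, and the axon carries the binary result. The weights and threshold below are arbitrary assumptions; real neurons are vastly more complex than this sketch.

```python
# Toy artificial neuron mirroring the outline: dendrites -> inputs,
# soma -> weighted sum vs. threshold, axon -> binary output.
# Weights and threshold are arbitrary illustrative values.

def neuron(inputs: list[float], weights: list[float], threshold: float) -> int:
    """Fire (return 1) if the weighted sum of inputs exceeds threshold."""
    soma_potential = sum(i * w for i, w in zip(inputs, weights))
    return 1 if soma_potential > threshold else 0

# A synapse's strength is modelled by its weight: the same input can
# excite (positive weight) or inhibit (negative weight) the neuron.
assert neuron([1.0, 1.0], [0.6, 0.6], threshold=1.0) == 1
assert neuron([1.0, 1.0], [0.6, -0.6], threshold=1.0) == 0
```

Whole brain emulation would, in effect, have to capture this kind of connectivity and signalling for every one of the brain's billions of neurons, which is what makes the scanning and simulation barriers discussed below so formidable.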

Of course, to make sense of human behaviour we would have to view it in the context of an animal interacting with its physical and social environment; brain activity would be meaningless otherwise. Now imagine applying this to an artificial intelligence. There is little prospect of achieving the necessary computational power in practice yet, but the potential exists in theory. To move beyond theory, several barriers would have to be overcome. These amount to specific capabilities: the ability to physically scan brains in order to obtain the necessary information, the ability to interpret the scanned data and build a software model, and the ability to run that model as a full simulation. If all of these capabilities were achieved, the result would be a successful transfer of a brain into what was once a vessel and now becomes an artificial intelligence. Ray Kurzweil has argued that machines are becoming more and more biological, and that each new technology changes the course of evolution itself, generating and steering ideas in new directions. With WBE, countless possibilities open up for interpretation, questions, and discovery. One of them: what next? We would have an artificial vessel containing what is now effectively immortal information and knowledge. Since humanity advances with the knowledge it can access, such a vessel could surely help invent solutions to current problems such as cancer and the other diseases we cannot yet cure. Judging by the anti-ageing creams and adverts, humanity clearly wants more than just a lifespan: with every passing year we draw closer to the inescapable figure of the grim reaper. What if that fate could be avoided?
AI in healthcare and medicine can organise patient pathways during hospital visits and supply physicians with all the information they need to make a sound decision and follow up on it. The most obvious application of AI in healthcare is data management, though it may also extend to performing surgery in hospital or even drawing blood in a GP clinic. Collecting data, storing it, normalising it, and tracing its lineage is one of the first steps in reforming today's limited healthcare systems. Recently the AI research branch of the search giant Google launched its Google DeepMind Health project, which mines medical-record data to provide better and faster health services. The project is in its initial phase; at present it is cooperating with the Moorfields Eye Hospital NHS Foundation Trust to improve eye treatment. AI is thus already being integrated into the medical industry, and we can expect more to come. BBC technology news recently reported tests of a tiny origami-inspired robot that compensates for its size by performing surgical tasks with great precision. Thanks to progress in medicine, hundreds of millions of people in the developed world enjoy a standard of living that was once rare and reserved for the privileged, something only a few dared to dream of in former years: good healthcare, nutrition, and longevity. However, the perennial question of AI or robots going rogue still demands a risk assessment. Can we really trust robotics with fateful, life-changing decisions? The transition from human-level AI to superintelligence seems inescapable, and as far as we know it could be very rapid.
If intelligence spikes suddenly, thanks to algorithmic self-improvement, the resulting system or systems are likely to be very powerful. How they behave, whether friendly or hostile, predictable or enigmatic, conscious or not, capable of empathy or suffering, will depend on their underlying architecture and organisation and on the reward function they implicitly or explicitly implement. The more unpredictable such systems are, the more dangerous: ordinary human interactions could produce threatening outcomes that may be irreversible.
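The "spike" from algorithmic self-improvement can be illustrated with a toy model in which each generation's intelligence also raises the improvement factor applied to the next generation, giving faster-than-exponential growth. The starting value and the 0.1 coupling constant are purely illustrative assumptions, not predictions.

```python
# Toy model of an intelligence spike from recursive self-improvement:
# smarter systems make bigger improvements to their successors.
# The initial value and coupling constant 0.1 are illustrative only.

def self_improve(intelligence: float, generations: int) -> list[float]:
    """Each generation multiplies intelligence by a factor that itself
    grows with the current level, so growth is super-exponential."""
    history = [intelligence]
    for _ in range(generations):
        factor = 1 + 0.1 * history[-1]  # better systems improve faster
        history.append(history[-1] * factor)
    return history

growth = self_improve(1.0, 5)
# Unlike plain exponential growth, the generation-to-generation ratio
# itself keeps rising:
ratios = [b / a for a, b in zip(growth, growth[1:])]
assert all(later > earlier for earlier, later in zip(ratios, ratios[1:]))
```

The point of the sketch is qualitative, not quantitative: once improvement feeds back into the improver, the curve bends upward ever more steeply, which is why the window for steering such a system safely could be short.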

 

Conclusion:

 

Technology infuses modern life in the developed world. Most of our basic infrastructure depends on it, from energy to finance to politics, from transport to communications. All these things existed before computers, of course, but in each area computers have helped reduce costs and improve efficiency while adding new functionality and capacity. Human communication in particular has been transformed by the internet, by smartphones, and by social networking. Centuries of advancing technology have benefited mankind enormously: thanks to investment in agriculture, medicine, and education, many millions in what are now MEDCs enjoy a high standard of living and quality of life. We have labour-saving devices that relieve the burden of daily chores such as cooking, washing, and cleaning, and we have plentiful leisure time and ways of enjoying it. Nevertheless, humanity still faces many global challenges: climate change, the limited supply of fossil fuels constraining crude oil production, global poverty, and diseases that have yet to be cured, such as cancer and dementia.

The best hope we have of tackling these problems is surely through scientific and technological advances, and the best way to accelerate science and technology is to recruit and train brilliant minds, which loops back to improving education. So the arrival of human-level artificial intelligence, perhaps with a pattern of intellectual strengths and weaknesses complementing our own, should lead to more rapid progress, and an intelligence explosion may follow, making the prophecy a reality. If commentators such as Ray Kurzweil are correct, machine superintelligence could bring an era of change in which poverty and disease are permanently eliminated. Juxtaposed with this utopian world, however, is the aftershock.