
How Can We Use Deep Learning with Small Data?


When it comes to emerging cybersecurity trends, staying on top of recent developments can get tedious, since there is a great deal of news to follow. These days, however, the situation has changed dramatically: the cybersecurity realm seems to revolve around two words, deep learning.

Although we were initially taken aback by the massive coverage deep learning was receiving, it quickly became apparent that the buzz was well earned. In a fashion loosely similar to the human brain, deep learning enables an AI model to achieve highly accurate results by learning to perform tasks directly from text, images, and audio.

Until now, it was widely believed that deep learning requires a huge amount of data, on the scale housed by Silicon Valley giants such as Google and Facebook, to solve an organization's most complicated problems. Contrary to popular belief, however, enterprises can harness the power of deep learning even with access to a limited data pool.

To equip our readers with the knowledge needed to bring deep learning into their organizations, we've compiled an article that dives deep (no pun intended) into some of the ways enterprises can benefit from deep learning despite having access to limited, or 'small', data.

But before we get into the meat of the article, we'd like to make a small but essential suggestion: start simple. Before you formulate neural networks complex enough to feature in a sci-fi movie, experiment with a few simple, conventional models (e.g. a random forest) to get the hang of the software.

With that out of the way, let's get straight into some of the ways enterprises can adopt deep learning while having access to limited data.

#1- Fine-tuning the baseline model:

As we’ve already mentioned above, the first step that enterprises need to take after they’ve formulated a simple baseline deep learning model is to fine-tune them for the particular problem at hand.

However, fine-tuning a baseline model sounds far more difficult on paper than it actually is. The fundamental idea is simple: you take a model pre-trained on a large dataset that bears some resemblance to your domain, and then fine-tune it with your own limited data.

As far as obtaining a large dataset is concerned, enterprises can rely on ImageNet, which also offers an easy fix to many image classification problems. ImageNet gives organizations access to millions of images spread across a large number of classes, from animals to everyday objects, which makes it useful to enterprises from a wide variety of domains.

If the process of fine-tuning a pre-trained model to suit the specific needs of your organization still seems like too much work, a simple Google search will turn up hundreds of tutorials on how to do it.
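To make this concrete, here is a minimal transfer-learning sketch in Python using Keras, assuming TensorFlow is installed; the folder name "small_dataset/" is a hypothetical directory with one sub-folder of images per class, not a real dataset.

```python
import tensorflow as tf

# Hypothetical small dataset: one sub-folder of images per class.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "small_dataset/", image_size=(224, 224), batch_size=32)
num_classes = len(train_ds.class_names)

# Start from a model pre-trained on ImageNet and freeze its weights.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # only the new classification head is trained at first

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=5)

# Optional fine-tuning: unfreeze the base and keep training with a tiny learning rate,
# so the pre-trained features are only gently adjusted to the new domain.
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=3)
```

The usual recipe is exactly this two-step process: train the new head first with the base frozen, then unfreeze and continue at a much lower learning rate.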

#2- Collect more data:

Although the second point on our list might seem redundant to some of our more cynical readers, the fact remains: when it comes to deep learning, the larger your dataset is, the more accurate your results are likely to be.

Although the very premise of this article is helping enterprises that have limited data, we've often had the displeasure of encountering too many "higher-ups" who treat investing in data collection as a cardinal sin.

All too often, businesses overlook the benefits offered by deep learning simply because they are reluctant to invest time and effort in gathering data. If your enterprise is unsure about how much data needs to be collected, we'd suggest plotting learning curves as additional data is integrated into the model and observing the change in model performance.

Contrary to the popular belief held by many CSOs and CISOs, sometimes the best way to solve a problem is to collect more relevant data. The role of the CSO and CISO is especially important here, given the ever-present threat of cyber-attacks: total global spending on cybersecurity reached an estimated $103.1 billion in 2019, and the number continues to rise. To put this into perspective, consider a simple example: imagine you are trying to classify rare diamonds but have access to a very limited dataset. As the most obvious solution dictates, instead of having a field day with the baseline model, just collect more data!
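As a rough illustration, here is a short sketch of plotting a learning curve with scikit-learn, using the simple random-forest baseline suggested earlier; the synthetic dataset from make_classification merely stands in for your own features and labels.

```python
import matplotlib.pyplot as plt
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import learning_curve

# Synthetic stand-in for your own (small) feature matrix X and labels y.
X, y = make_classification(n_samples=600, n_features=20, random_state=0)

sizes, train_scores, val_scores = learning_curve(
    RandomForestClassifier(n_estimators=200, random_state=0),
    X, y, cv=5, train_sizes=np.linspace(0.1, 1.0, 8), scoring="accuracy")

plt.plot(sizes, train_scores.mean(axis=1), label="training accuracy")
plt.plot(sizes, val_scores.mean(axis=1), label="validation accuracy")
plt.xlabel("Number of training examples")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
# If the validation curve is still climbing at the right-hand edge,
# collecting more data is likely to pay off.
```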

#3- Data Augmentation:

Although the first two approaches discussed above can solve many of the problems enterprises with a small dataset face when implementing deep learning, they rely on a certain amount of luck to get the job done.

If you’re unable to have any success with fine-tuning a pre-existing data set either, we’d recommend trying data augmentation. The way that data augmentation is simple. Through the process of data augmentation, the input data set is altered, or augmented, in such a way that it gives a new output, without actually changing the label value.

To put the idea of data augmentation into perspective, consider a picture of a dog. When the image is rotated, a viewer can still tell that it's a dog; this is exactly what good data augmentation preserves. Contrast that with rotating an image of a road, which changes the apparent angle of elevation, leaves plenty of room for the deep learning algorithm to reach an incorrect conclusion, and defeats the purpose of implementing deep learning in the first place.

When it comes to image classification problems, data augmentation is a key technique, and there is a wide variety of augmentation methods that help a deep learning model gain a more robust understanding of the different image classes.
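A minimal sketch of label-preserving image augmentation with Keras preprocessing layers might look like the following; the "small_dataset/" folder is again a hypothetical placeholder.

```python
import tensorflow as tf

# Random, label-preserving transformations applied on the fly.
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),  # up to roughly 36 degrees either way
    tf.keras.layers.RandomZoom(0.1),
])

# Hypothetical dataset of (image, label) batches; the labels are left untouched.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "small_dataset/", image_size=(224, 224), batch_size=32)
augmented_ds = train_ds.map(
    lambda images, labels: (augment(images, training=True), labels))
```

Each training epoch then sees a slightly different variant of every image, which effectively stretches a small dataset further.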

Moreover, when it comes to augmenting data, the possibilities are virtually endless. Enterprises can apply augmentation in a variety of ways, including text augmentation for NLP and experimentation with GANs, which can generate entirely new data.

#4- Implementing an ensemble effect:

Deep learning networks are, by definition, built from multiple layers. However, contrary to the belief maintained by many, rather than viewing each layer only as an ever-richer hierarchy of features, the final layers can also be seen as offering an ensemble mechanism.

The view that enterprises with access to a limited, or smaller, dataset should still build their networks deep was also shared in a NIPS paper, which mirrors the belief expressed above. Enterprises with small data can exploit this ensemble effect to their advantage simply by building their networks deep, through fine-tuning or some other alternative.

#5- Incorporating autoencoders:

Although this fifth approach has seen only modest success, we're still on board with using autoencoders to pre-train a network and initialize it properly.

One of the biggest reasons enterprises fail to get past the initial hurdles of integrating deep learning is bad initialization and its many pitfalls. Poor initialization often leads to poor or incorrect training, which is where unsupervised pre-training with autoencoders can shine.

The fundamental idea behind an autoencoder is to build a neural network that learns to reconstruct its own input. If you are unsure how to use an autoencoder, there are several tutorials online that give clear-cut instructions.
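As a hedged sketch of the idea, the following Keras snippet pre-trains a small autoencoder on unlabelled data and then reuses the encoder to initialize a classifier; the dimensions and the random stand-in data are purely illustrative.

```python
import numpy as np
import tensorflow as tf

# Stand-in for a pool of unlabelled examples with 64 features each.
X_unlabelled = np.random.rand(1000, 64).astype("float32")

# Encoder compresses 64 -> 32 -> 16; decoder reconstructs 16 -> 32 -> 64.
encoder = tf.keras.Sequential([
    tf.keras.Input(shape=(64,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(16, activation="relu"),
])
decoder = tf.keras.Sequential([
    tf.keras.Input(shape=(16,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(64, activation="sigmoid"),
])

autoencoder = tf.keras.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X_unlabelled, X_unlabelled, epochs=10, batch_size=32)

# Reuse the pre-trained encoder as the initialization of a supervised model.
classifier = tf.keras.Sequential([
    encoder,
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
classifier.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# classifier.fit(X_labelled, y_labelled, ...)  # now train on the small labelled set
```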

To conclude:

At the end of the article, we'd like to reiterate what we've said throughout, with one addition: incorporate domain-specific knowledge into the learning process! Not only does such insight speed up learning, it also allows deep learning to produce better, more accurate results.


Rebecca is an enthusiastic cybersecurity journalist, a creative team leader, and the editor of PrivacyCrypts.


Three Uses Of Automation Within Supply Chain 4.0


The increased availability of advanced technologies has revolutionized the traditional supply chain model. Supply Chain 4.0 responds to modern customer expectations by relying heavily on the Internet of Things (IoT), advanced robotics, big data analytics, and blockchain. These tools enable automation and thus give organizations a chance to close information gaps and optimally match supply and demand.

“The reorganization of supply chains […] is transforming the model of supply chain management from a linear one, in which instructions flow from supplier to producer to distributor to consumer, and back, to a more integrated model in which information flows in an omnidirectional manner to the supply chain.” – Understanding Supply Chain 4.0 and its potential impact on global value chains

Industry giants like Netflix, Tesla, UPS, Amazon, and Microsoft rely heavily on automation within their supply chain to lead their respective industries. Let us take a closer look at three powerful automation use cases.

Three Uses Of Automation Within Supply Chain 4.0:

1. Managing demand uncertainty

A painful aspect of supply chain ecosystems is the demand uncertainty and the inability to accurately forecast demand. Generally, this leads to a set of performance issues, from increased operational cost to excess inventory and suboptimal production capacity. Automation tools can forecast demand, remove uncertainty from the equation, and thus improve operational efficiency at each step along the supply chain.

Big data analytics is an established tool that helps organizations manage demand uncertainty. It consists of data collection & aggregation infrastructure combined with powerful ML algorithms, designed to forecast demand based on historical (or even real-time) data. Modern storage solutions (such as data lakes) make it possible to aggregate data from a variety of sources: market trends, competitor information, and consumer preferences. 

Machine learning (ML) algorithms continually analyze this rich data to find new patterns, improve the accuracy of demand forecasting, and enhance operational efficiency. This is the recipe Amazon uses to predict demand for a product before it is purchased and stocked in its warehouses. By examining tweets and posts on websites and social media, Amazon gauges customer sentiment about products and has a data-based way to model demand uncertainty.

The good news is that such powerful analytics tools are not restricted to industry giants anymore. Out-of-the-box solutions (such as Amazon Forecast) make such capabilities widely available to all organizations that wish to handle demand uncertainty. 
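As a rough sketch of what such forecasting can look like in practice, the following Python snippet fits a gradient-boosting model to lag features built from historical sales; the file name "daily_sales.csv" and its columns are hypothetical, and a production pipeline would add far richer signals (promotions, pricing, weather, market data).

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical history with columns: date, units_sold.
sales = pd.read_csv("daily_sales.csv", parse_dates=["date"])

# Simple lag features: demand 1, 7 and 28 days earlier, plus day of week.
for lag in (1, 7, 28):
    sales[f"lag_{lag}"] = sales["units_sold"].shift(lag)
sales["dayofweek"] = sales["date"].dt.dayofweek
sales = sales.dropna()

features = ["lag_1", "lag_7", "lag_28", "dayofweek"]
train, test = sales.iloc[:-28], sales.iloc[-28:]  # hold out the last four weeks

model = GradientBoostingRegressor().fit(train[features], train["units_sold"])
forecast = model.predict(test[features])
print("Mean absolute error:", abs(forecast - test["units_sold"]).mean())
```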

2. Managing process uncertainties

Organizations operating in today's supply chain industry need to handle increasingly complex logistics processes. The competitive environment, together with ever-increasing customer expectations, makes it imperative to minimize uncertainties across all areas of supply chain management.

From production and inventory, to order management, packing, and shipping of goods, automation tools can tackle uncertainties and minimize process flaws. AI, robotics, and IoT are well-known methods that facilitate an optimal flow of resources, minimize delays, and promote optimized production schedules.

The Internet of Things (IoT) plays an important role in overcoming process uncertainties in the supply chain. One major IoT application is the accurate tracking of goods and assets: sensors track items in the warehouse and during the loading, in-transit, and unloading phases. This enables applications such as live monitoring, which increases process visibility and lets managers act on real-time information. It also makes it possible to further optimize a variety of other processes, from loading operations to payment collection.

[Image: IoT increases process visibility and enables managers to act on real-time information. Source: Canva]
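To illustrate the kind of real-time logic live monitoring enables, here is a toy Python sketch that flags shipments whose sensors report a temperature excursion or have gone silent; the event fields and thresholds are illustrative assumptions, not any vendor's schema.

```python
from datetime import datetime, timedelta, timezone

# Stand-in for a stream of incoming sensor readings.
events = [
    {"shipment": "CNTR-001", "temp_c": 4.2, "ts": datetime.now(timezone.utc)},
    {"shipment": "CNTR-002", "temp_c": 9.8, "ts": datetime.now(timezone.utc) - timedelta(hours=3)},
]

TEMP_LIMIT_C = 8.0            # hypothetical cold-chain tolerance
STALE_AFTER = timedelta(hours=2)  # how long before a silent sensor is a problem

def alerts(readings):
    now = datetime.now(timezone.utc)
    for r in readings:
        if r["temp_c"] > TEMP_LIMIT_C:
            yield f"{r['shipment']}: temperature excursion ({r['temp_c']} C)"
        if now - r["ts"] > STALE_AFTER:
            yield f"{r['shipment']}: no reading for {now - r['ts']}"

for alert in alerts(events):
    print(alert)
```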

Since 2012, Amazon fulfillment warehouses have used AI-powered robots that do real magic. Robots and humans work side by side, coordinating over wireless communication to handle orders that are unique in size, shape, and weight. Thousands of Wi-Fi-connected robots gather merchandise for each individual order. These robots have two powered wheels that let them rotate in place, infrared sensors for obstacle detection, and built-in cameras that read QR codes on the floor, which the robots use to determine their location and direction. In this way, efficiency increases, the physical strain on employees is reduced, and process uncertainty is kept to a minimum.

Another example of how automation drives process improvements comes from vehicle transport company CFR Rinkens, which has applied automation in its accounting and billing departments to speed up payment processing. Automatically created invoices have reduced costs and errors, which in turn reduces delays.

“An area of need that we applied automation was within the accounting department for billing and paying vendors. With tons of invoices coming in and out, automation here ensures nothing falls through the cracks, and clients receive invoices on time providing them with enough time to process payment.”   -Joseph Giranda, CFR Rinkens

The biggest benefit of automation is transparency. Organizing each step of the supply chain eliminates grey areas for both clients and businesses.

3. Synchronization among supply chain partners and customers

Digital supply chains are characterized by synchronization among hundreds of departments, vendors, suppliers, and customers. In order to orchestrate activities all the way from planning to execution, supply chains require information to be collected, analyzed, and utilized in real-time. A sure way to achieve a fully synchronized supply chain is to leverage the power of automation. 

CFR Rinkens uses a dynamic dashboard to keep track of cargo as it delivers vehicles across the world. The dashboard is automatically updated with relevant information, which increases transparency and efficiency. High transparency allows for excellent customer service and satisfaction.

“Upon a vehicle’s arrival, images are taken and uploaded onto a CFR dashboard that our clients are able to access. All vehicle documents, images, and movements are automatically displayed within this dashboard. This automation helps on the customer service side because it allows for full transparency and accountability for quality control, delivery window times, and real-time visibility.”   -Joseph Giranda, CFR Rinkens

Automation offers an effective solution to the synchronization issue through blockchain. Blockchain is a distributed digital ledger with many applications; it can be used for any exchange, tracking, or payment. It makes information instantly visible to all supply chain partners and enables a multitude of applications: documents, transactions, and goods can easily be tracked, and payments and pricing can be recorded over time, all in a secure and transparent manner.
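The tamper-evident, append-only idea at the heart of such a ledger can be illustrated with a toy hash-linked chain in Python; this is not a distributed blockchain network, just a sketch of why recorded supply-chain events are hard to alter after the fact.

```python
import hashlib
import json

def add_block(chain, event):
    # Each block records the hash of the previous block alongside the event data.
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"event": event, "prev_hash": prev_hash}
    block_hash = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": block_hash})

ledger = []
add_block(ledger, {"shipment": "CNTR-001", "status": "loaded", "port": "Rotterdam"})
add_block(ledger, {"shipment": "CNTR-001", "status": "in transit"})
add_block(ledger, {"shipment": "CNTR-001", "status": "delivered"})

# Verify the chain: any change to an earlier event breaks every later hash.
print(all(ledger[i]["prev_hash"] == ledger[i - 1]["hash"] for i in range(1, len(ledger))))
```

Because each block stores the hash of the previous one, tampering with an earlier entry invalidates everything that follows, which is what makes the shared record trustworthy across partners.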

[Image: Digital supply chains increase transparency and efficiency. Source: Canva]

The shipping giant FedEx has joined the Blockchain in Transport Alliance (BiTA) and launched a blockchain-powered pilot program to help resolve customer disputes. Similarly, UPS joined BiTA as early as 2017, aiming for increased transparency and efficiency across its entire partner network. Such real-life use cases show the potential of blockchain technology and the impact automation can have on the entire freight industry.

Blockchain increases the transparency of the supply chain and removes information latency for all partners on the network. The resulting benefits include increased productivity and operational efficiency as well as better service levels. Its massive potential makes blockchain a top priority for supply chain organizations and their digital automation journey.

Conclusion

Automation is playing a major role in defining the Supply Chain 4.0 environment. With heavy technological tools available to them, leading organizations are taking serious leaps towards efficiency and productivity. Automation gives them the power to accelerate and optimize the whole end-to-end supply chain journey. It also enables them to use data to their advantage and close information gaps across their network. 

Where To Go From Here?

Data can be the obstacle or the solution to realizing all of these benefits. Fortunately, experts-for-hire are easy to reach. Blue Orange Digital, a top-ranked AI development agency in NYC, specializes in cloud data storage solutions and supply chain optimization. They provide custom solutions to meet each business's unique needs, but also offer many pre-built options for supply chain leaders. From a technology point of view, we have outlined several different ways to improve the efficiency of the supply chain; taken together, these improvements give you Supply Chain 4.0.

All images source: Canva



What is the Turing Test and Why Does it Matter?


If you’ve been around Artificial Intelligence (AI) you have undoubtedly heard of ‘The Turing Test‘.  This was a test first proposed by Alan Turing in 1950, the test was designed to be the ultimate experiment on whether or not an AI has achieved human level intelligence. Conceptually, if the AI is able to pass the test, it has achieved intelligence that is equivalent to, or indistinguishable from that of a human.

We will explore who Alan Turing is, what the test is, why it matters, and why the definition of the test may need to evolve.

Who is Alan Turing?

Turing was an eccentric British mathematician recognized for his groundbreaking, futurist ideas.

In 1935, at the age of 22, his work on probability theory won him a Fellowship of King's College, University of Cambridge. His abstract mathematical ideas then pushed him in a completely different direction, into a field that was yet to be invented.

In 1936, Turing published a paper that is now recognized as the foundation of computer science. This is where he invented the concept of a ‘Universal Machine’ that could decode and perform any set of instructions.

In 1939, Turing was recruited by the British government's code-breaking department. At the time, Germany was using what is called an 'Enigma machine' to encipher all of its military and naval signals. Turing rapidly developed a new machine (the 'Bombe') capable of breaking Enigma messages on an industrial scale, a development deemed instrumental in pushing back the aggression of Nazi Germany.

In 1946, Turing returned to the revolutionary idea he had published in 1936, aiming to develop an electronic computer capable of running various types of computations. He produced a detailed design for what was called the Automatic Computing Engine (ACE).

In 1950, Turing published his seminal work asking whether a machine can think. This paper completely transformed both computer science and AI.

In 1952, after being reported to the police by a young man, Turing was convicted of gross indecency due to his homosexual activities. His security clearance was revoked, his career was destroyed, and, as punishment, he was chemically castrated.

With his life shattered, he was discovered in his home by his cleaner on 8 June 1954; he had died from cyanide poisoning the day before, a partly eaten apple lying next to his body. The coroner's verdict was suicide.

Fortunately, his legacy continues to live on.

What is the Turing Test?

In 1950, Alan Turing published a seminal paper titled "Computing Machinery and Intelligence" in the journal Mind. In this detailed paper the question "Can machines think?" was posed. Rather than trying to define whether a machine can think, the paper suggested testing the machine with the 'imitation game'. This simple game is played with three people:

  • a man (A)
  • a woman (B),
  • and an interrogator (C) who may be of either sex.

The concept of the game is that the interrogator stays in a room separate from both the man (A) and the woman (B); the goal is for the interrogator to identify who the man is and who the woman is. The man (A) tries to deceive the interrogator, while the woman (B) attempts to help the interrogator (C). To keep things fair, no verbal cues can be used: only typewritten questions and answers are sent back and forth. The question then becomes: how does the interrogator know whom to trust?

The interrogator only knows them by the labels X and Y, and at the end of the game he simply states either ‘X is A and Y is B’ or ‘X is B and Y is A’.

The question then becomes: if we replace the man (A) or the woman (B) with an intelligent machine, can the machine use its AI to trick the interrogator (C) into believing it is a man or a woman? This, in essence, is the nature of the Turing Test.

In other words, if you were to communicate with an AI system unknowingly and assumed that the 'entity' on the other end was a human, could the AI deceive you indefinitely?
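For readers who find the setup easier to follow in code, here is a toy Python sketch of the imitation game's structure; the respondent functions are placeholders (a canned reply and a typed human answer), not an actual chatbot.

```python
import random

def machine_respondent(question):
    return "I'd rather not say."  # stand-in for a chatbot's typed answer

def human_respondent(question):
    return input(f"(human) {question}\n> ")  # typed answer from a real person

def imitation_game(questions):
    # The interrogator sees only the labels X and Y, assigned at random.
    hidden = {"X": machine_respondent, "Y": human_respondent}
    if random.random() < 0.5:
        hidden = {"X": human_respondent, "Y": machine_respondent}
    for q in questions:
        print(f"Interrogator asks: {q}")
        for label, respond in hidden.items():
            print(f"{label}: {respond(q)}")
    guess = input("Which label is the machine, X or Y? ")
    return hidden.get(guess) is machine_respondent  # True if the interrogator was right

# Example: imitation_game(["What is your favourite memory?", "Describe the smell of rain."])
```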

Why the Turing Test Matters

In Alan Turing’s paper he alluded to the fact that he believed that the Turing Test could eventually be beat. He states: “by the year 2000 I believe that in about fifty years’ time it will be possible to programme computers, with a storage capacity of about 109, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent, chance of making the right identification after five minutes of questioning.

When looking at the Turing Test through a modern lens it seems very possible that an AI system could trick a human for five minutes. How often have humans interacted with support chatbots not knowing if the chatbot is a human or a bot?

There have been many reports of the Turing Test being passed. In 2014, a chatbot program named Eugene Goostman, which simulates a 13-year-old Ukrainian boy, was said to have passed the Turing Test at an event organised by the University of Reading, apparently convincing 33% of the judges at the Royal Society in London that it was human. Nonetheless, critics were quick to point out the inadequacies of the test: the fact that so many judges were not convinced, the short duration of the test (only five minutes), and the lack of forthcoming evidence for the achievement.

Nonetheless, in an age of Natural Language Processing (NLP), with its subfields of natural-language understanding (NLU) and natural-language interpretation (NLI), the question needs to be asked: if a machine asks and answers questions without fully understanding the context behind what it says, is the machine truly intelligent?

After all, if you review the technology behind Watson, the IBM computer system capable of answering questions posed in natural language and built to defeat Jeopardy champions, it becomes apparent that Watson beat the world champions by accessing a vast store of the world's knowledge without actually understanding the context behind the language; much like a search engine, it matched keywords and reference points. If an AI can achieve this level of performance without real comprehension, then, given today's advancing technology, deceiving a human for 5 or 10 minutes is simply not setting the bar high enough.

Should the Turing Test Evolve?

The Turing Test has done a remarkable job of standing the test of time. Nonetheless, AI has evolved dramatically since 1950. Every time AI achieves a feat we claimed only humans were capable of, we set the bar higher. It is only a matter of time until AI can consistently pass the Turing Test as we understand it.

When reviewing the history of AI, the ultimate barometer of whether AI can achieve human-level intelligence has almost always been whether it can defeat humans at various games. In 1949, Claude Shannon published his thoughts on how a computer might be made to play chess, as chess was then considered the ultimate summit of human intelligence.

It wasn’t until February 10, 1996, after a grueling three hour match that world chess champion Garry Kasparov lost the first game of a six-game match against Deep Blue, an IBM computer capable of evaluating 200 million moves per second. It wasn’t long until Chess was no longer considered the pinnacle of human intelligence. Chess was then replaced with the game of Go, a game which originated in China over 3000 years ago. The bar for AI achieving human level intelligence was moved up.

Fast forward to October 2015: AlphaGo played its first match against the reigning three-time European Champion, Mr Fan Hui, and won 5-0, the first time a program had ever beaten a Go professional. Go is considered to be the most sophisticated game in the world, with its 10³⁶⁰ possible moves. All of a sudden, the bar was moved up again.

Eventually, the argument became that an AI had to be able to defeat teams of professional players at complex multiplayer video games. OpenAI quickly rose to the challenge using deep reinforcement learning.

It is due to this consistent moving of the proverbial bar that we should consider a new, modern definition of the Turing Test. The current test may rely too much on deception and on the technology found in a chatbot. Potentially, with the evolution of robotics, we may require that to truly achieve human-level intelligence an AI will need to interact and "live" in our actual world, rather than in a game environment or a simulated environment with its defined rules.

If, instead of deceiving us, a robot can interact with us like any other human, by having conversations and proposing ideas and solutions, maybe only then will the Turing Test truly be passed. The ultimate version of the Turing Test may be when an AI approaches a human and attempts to convince us that it is self-aware.

At this point, we will also have achieved Artificial General Intelligence (AGI). It would then be inevitable that the AI/robot would rapidly surpass us in intelligence.



Are we Living in an Artificial Intelligence Simulation?


The existential question we should be asking ourselves is this: are we living in a simulated universe?

The idea that we are living in a simulated reality may seem unconventional and irrational to the general public, but it is a belief shared by many of the brightest minds of our time, including Neil deGrasse Tyson, Ray Kurzweil, and Elon Musk. Elon Musk famously asked the question 'What's outside the simulation?' in a podcast with Lex Fridman, a research scientist at MIT.

To understand how we could be living in a simulation, one needs to explore the simulation hypothesis or simulation theory which proposes that all of reality, including the Earth and the universe, is in fact an artificial simulation.

While the idea dates back as far as the 17th century, to philosopher René Descartes, it started to gain mainstream interest when Professor Nick Bostrom of Oxford University wrote a seminal paper in 2003 titled "Are You Living in a Computer Simulation?"

Nick Bostrom has since doubled down on his claims and uses probabilistic analysis to make his point. There are many interviews in which he lays out his views in detail, including this talk at Google headquarters.

We will explore the concept of how a simulation can be created, who would create it, and why anyone would create it.

How a Simulation Would be Created

If you analyze the history of video games, there is a clear innovation curve in the quality of games. In 1972, Atari released Pong, in which players competed in a tennis-style game featuring simple two-dimensional graphics.

Video games quickly evolved. The 80s featured 2D graphics, the 90s featured 3D graphics, and since then we have been introduced to Virtual Reality (VR).

The accelerated rate of progress in VR cannot be overstated. Initially, VR suffered from many issues, including giving users headaches, eye strain, dizziness, and nausea. While some of these issues still exist, VR now offers immersive educational, gaming, and travel experiences.

It is not difficult to extrapolate that, at the current rate of progress, in 50 or even 500 years VR will become indistinguishable from reality. A gamer could immerse themselves in a simulated setting and at some point find it difficult to distinguish reality from fiction; the gamer could become so immersed in the fictional reality that they do not realize they are simply a character in a simulation.

Who Would Create the Simulation?

How we would create a simulation can be extrapolated from exponential technological advances, as described by 'The Law of Accelerating Returns'. Who would create these simulations, meanwhile, is a more challenging puzzle. Many different scenarios have been proposed, and all are equally valid, as there is currently no way of testing or validating these theories.

Nick Bostrom has proposed that an advanced civilization may choose to run "ancestor simulations": simulations that are indistinguishable from reality, with the goal of simulating its human ancestors. The number of simulated realities could run into infinity. This is not a far stretch once you consider that the entire purpose of deep reinforcement learning is to train an artificial neural network to improve itself in a simulated setting.

If we analyze this from a purely AI point of view, we could be simulating different realities to discover the truth about a series of events. You could create one simulation in which North Korea and South Korea remain divided, and another in which the two Koreas are unified. Each small change in a simulation could have long-term implications.

Other theories abound: that the simulations are created by an advanced AI, or even an alien species. The truth is completely unknown, but it is interesting to speculate on who would be running such simulations.

How it Works

There are multiple arguments about how a simulated universe would work. Would the entire history of planet Earth, all 4.5 billion years of it, be simulated? Or would the simulation simply begin at an arbitrary starting point, such as the year AD 1? This would imply that, to save computing resources, the simulation would simply create archaeological and geological history for us to study. Then again, a random starting point may defeat the purpose of a simulation designed to study the nature of evolutionary forces and how lifeforms react to cataclysmic events such as the five major extinctions, including the one that wiped out the dinosaurs 65 million years ago.

A more likely scenario is that the simulation would begin when the first modern humans started moving out of Africa, 70,000 to 100,000 years ago. The human (simulated) perception of time would differ from the time experienced inside the computer, especially once you factor in quantum computing.

A quantum computer could make simulated time non-linear: we could experience the perception of time without the actual passage of time. Even without the power of quantum computing, OpenAI successfully used large-scale deep reinforcement learning to enable a robotic hand to teach itself to manipulate a Rubik's Cube, solving it after practicing for the equivalent of 13,000 years inside a computer simulation.

Why People Believe

When you consider the wide spectrum of those who believe, or at least acknowledge, that there is a probability we live in a simulation, a common denominator is present: believers have a deep belief in science, in technological progress, and in exponential thinking, and most of them are highly successful.

If you are Elon Musk, what is more likely: that out of 7.7 billion people you are the first person taking humans to Mars, or that you are living in a simulation? This might be why Elon Musk has openly stated that "There's a billion to one chance we're living in base reality."

One of the more compelling arguments comes from George Hotz, the enigmatic hacker and founder of autonomous vehicle technology startup Comma.ai. His engaging presentation at the popular SXSW 2019 conference had attendees believing for an hour that they were living inside a simulation. What we can conclude with certainty is that we should keep an open mind.

 
