AI May Soon Determine the Perfect Room Temperature

Artificial intelligence (AI) harnesses the power of modern computers to simulate human intelligence in order to improve a wide variety of devices and processes. One device it could end up improving is the thermostat, along with the process of setting the right room temperature for a group of people with a range of temperature preferences.

Setting just the right room temperature for multiple people is a complex process, because the proper temperature is largely subjective. While almost no one enjoys a room that is freezing cold or boiling hot, some people like it a little cool, others like it a little warm, and still others prefer somewhere in between. It is also not uncommon for arguments to start over the temperature, especially between couples and families living together.

But researchers at Purdue University are using AI to try to change this situation by developing a new framework for building intelligent heating, ventilating and air conditioning (HVAC) systems.

The researchers describe this framework in a newly published paper called “Learning Personalized Thermal Preferences via Bayesian Active Learning with Unimodality Constraints.” The paper describes what is known as a recommendation system, a kind of AI model that tries to determine what people like and dislike. In this case, the model tries to determine what people like in terms of temperature.

The paper’s objective was to develop a framework for building real HVAC systems that would tailor the temperature of a room to the preferences of its occupants. Such a system would accomplish this by asking the occupants a series of questions in an intelligent way, in order to determine the temperature that provides optimal comfort for everyone. It must ask these questions because there is no other reliable way of determining the proper temperature setting.

The system would work by asking each occupant of a building, every half-hour, how satisfied they are with the current temperature. Occupants could respond in one of three ways: they could indicate that they are happy with the current temperature, that they would like it warmer, or that they would like it cooler. Based on these responses, the system would either maintain the current temperature or raise or lower it in increments, refining the increments with each successive round of questioning.
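To make the idea concrete, here is a minimal sketch of such a polling loop. The function names, the majority-vote rule, and the halving of the adjustment step are illustrative assumptions; the actual Purdue framework uses Bayesian active learning rather than this simple heuristic.

```python
# Illustrative sketch only: the real framework uses Bayesian active learning,
# not this simple majority-vote heuristic. All names here are hypothetical.

def adjust_setpoint(setpoint, responses, step):
    """Nudge the setpoint toward the majority request, or hold it."""
    warmer = responses.count("warmer")
    cooler = responses.count("cooler")
    if warmer > cooler:
        return setpoint + step
    if cooler > warmer:
        return setpoint - step
    return setpoint

def run_polling_loop(ask_occupants, setpoint=21.0, step=1.0, rounds=10):
    """ask_occupants(setpoint) returns a list of "happy"/"warmer"/"cooler"."""
    for _ in range(rounds):
        responses = ask_occupants(setpoint)
        if all(r == "happy" for r in responses):
            break                          # everyone is satisfied
        setpoint = adjust_setpoint(setpoint, responses, step)
        step = max(step / 2, 0.25)         # refine the increment each round
    return setpoint
```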

To begin the study, the researchers first created three simulated building occupants. Each of these simulated occupants worked in a separate office and preferred a temperature in the range of 22.1-25 degrees Celsius (71.78-77 degrees Fahrenheit). The researchers then fed responses from the simulated occupants into the AI model, which, after six rounds of questioning, was able to find a temperature that worked well for all three occupants.
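Continuing the illustrative sketch above, simulated occupants can be represented as comfort bands and plugged into the same loop. The bands below are made up for illustration; they merely fall inside the 22.1-25 °C range reported by the researchers.

```python
# Hypothetical comfort bands (°C); each simulated occupant reports "happy"
# only when the setpoint falls inside their band.
comfort_bands = [(22.1, 23.5), (22.8, 24.2), (23.0, 25.0)]

def ask_occupants(setpoint):
    responses = []
    for low, high in comfort_bands:
        if setpoint < low:
            responses.append("warmer")
        elif setpoint > high:
            responses.append("cooler")
        else:
            responses.append("happy")
    return responses

final_setpoint = run_polling_loop(ask_occupants, setpoint=21.0)
print(f"Converged setpoint: {final_setpoint:.1f} °C")   # 23.0 °C in this toy run
```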

In the next stage of their study, the researchers tested their AI model in a real-world setting. They went to a private office building in West Lafayette, Indiana and conducted a test with six people who worked there. Over a number of days, each of these people visited a room initially set to 21 degrees Celsius (69.8 degrees Fahrenheit). The system then asked them every 30 minutes how satisfied they were with the temperature, and their responses were used to determine a new range of temperatures.

Over the course of the test, each of the six people in the study was asked between five and 10 questions. Eventually, the system was able to determine a two-degree range of temperatures that would satisfy all six people with 95% certainty.

The researchers think that an actual HVAC system based on their framework could do more than just improve the comfort of building occupants. They believe that it could also lead to a reduction of energy use. They further believe that such an HVAC system would be simple to use and would not cost much to operate.

A science fiction nerd at heart who grew up reading everything written by Robert A. Heinlein and Isaac Asimov, Alan loves to report on the future and AI.

Big Developments Bring Us Closer to Fully Untethered Soft Robots

Researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and Caltech have developed new soft robotic systems, inspired by origami, that are able to move and change shape in response to external stimuli. The developments bring us closer to fully untethered soft robots. Today's soft robots rely on external power and control, so they have to be tethered to off-board systems built from hard components.

The research was published in Science Robotics. Jennifer A. Lewis, the Hansjorg Wyss Professor of Biologically Inspired Engineering at SEAS and co-lead author of the study, spoke about the new developments.

“The ability to integrate active materials within 3D-printed objects enables the design and fabrication of entirely new classes of soft robotic matter,” she said. 

The researchers used origami as a model for creating multifunctional soft robots. Through sequential folds, origami can take on multiple shapes and functionalities while remaining a single structure. The research team used liquid crystal elastomers, which change shape when exposed to heat, and 3D-printed two types of soft hinges. The hinges fold at different temperatures and can be programmed to fold in a specific order.
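The idea of a programmed fold sequence can be pictured with a short sketch. The hinge names, activation temperatures, and fold angles below are invented for illustration; they are not values from the paper.

```python
# Toy model of sequentially folding hinges; all values are hypothetical.
from dataclasses import dataclass

@dataclass
class Hinge:
    name: str
    activation_temp_c: float   # temperature at which this LCE hinge folds
    fold_angle_deg: float      # target fold angle once activated

hinges = [
    Hinge("body", 170.0, 72.0),
    Hinge("flap_a", 190.0, 90.0),
    Hinge("flap_b", 195.0, 90.0),
]

def fold_sequence(surface_temp_c, hinges):
    """Return the hinges that activate at this temperature, in the order
    they would trigger as the structure heats up."""
    active = [h for h in hinges if surface_temp_c >= h.activation_temp_c]
    return sorted(active, key=lambda h: h.activation_temp_c)

for hinge in fold_sequence(200.0, hinges):
    print(f"{hinge.name} folds {hinge.fold_angle_deg}° at {hinge.activation_temp_c} °C")
```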

Arda Kotikan is a graduate student at SEAS and the Graduate School of Arts and Sciences and the co-first author of the paper. 

“With our method of 3D printing active hinges, we have full programmability over temperature response, the amount of torque the hinges can exert, their bending angle, and fold orientation. Our fabrication method facilitates integrating these active components with other materials,” she said. 

Connor McMahan is a graduate student at Caltech and co-first author of the paper as well. 

“Using hinges makes it easier to program robotic functions and control how a robot will change shape. Instead of having the entire body of a soft robot deform in ways that can be difficult to predict, you only need to program how a few small regions of your structure will respond to changes in temperature,” he said.

The team of researchers built multiple soft devices. One of these was an untethered soft robot called “Rollbot,” which starts as a flat sheet 8 centimeters long and 4 centimeters wide. When the sheet comes into contact with a hot surface of around 200°C, one set of hinges folds and shapes the robot into a pentagonal wheel.

On each of the five sides of the wheel, there are more sets of hinges that fold when in contact with a hot surface. 

“Many existing soft robots require a tether to external power and control systems or are limited by the amount of force they can exert. These active hinges are useful because they allow soft robots to operate in environments where tethers are impractical and to lift objects many times heavier than the hinges,” said McMahan.

The research conducted so far has focused solely on temperature responses. In the future, the liquid crystal elastomers will be studied further, as they can also respond to light, pH, humidity, and other external stimuli.

“This work demonstrates how the combination of responsive polymers in an architected composite can lead to materials with self-actuation in response to different stimuli. In the future, such materials can be programmed to perform ever more complex tasks, blurring the boundaries between materials and robots,” said Chiara Daraio, Professor of Mechanical Engineering and Applied Physics at Caltech and co-lead author of the study.

The research included co-authors Emily C. Davidson, Jalilah M. Muhammad, and Robert D. Weeks. The work was supported by the Army Research Office, Harvard Materials Research Science and Engineering Center through the National Science Foundation, and the NASA Space Technology Research Fellowship. 

Modeling Artificial Neural Networks (ANNs) On Animal Brains

Cold Spring Harbor Laboratory (CSHL) neuroscientist Anthony Zador has argued that evolution and animal brains can serve as inspiration for machine learning, an approach that could help AI solve many different problems.

According to Zador, artificial intelligence (AI) can be greatly improved by looking to animal brains. With this approach, neuroscientists and AI researchers gain a new way of tackling some of the field's most pressing problems.

Anthony Zador, M.D., Ph.D., has dedicated much of his career to explaining the complex neural networks within the living brain, all the way down to the individual neuron. At the beginning of his career, however, his focus was different: he studied artificial neural networks (ANNs), the computing systems that underpin much of the progress in the AI sector. ANNs are loosely modeled after the networks in animal and human brains, but until now that is roughly where the resemblance stopped.

In a recent perspective piece published in Nature Communications, Zador detailed how new and improved learning algorithms are allowing AI systems to greatly outperform humans on a variety of tasks and problems, including games like chess and poker. Yet even though these computers perform so well on complex problems, they are often confused by things we humans would consider simple.

If researchers in the field were able to solve this problem, robots could reach a point where they learn to do natural, organic things such as stalking prey or building a nest. They could even handle something like washing the dishes, which has proven extremely difficult for robots.

“The things that we find hard, like abstract thought or chess-playing, are actually not the hard thing for machines. The things that we find easy, like interacting with the physical world, that’s what’s hard,” Zador explained. “The reason that we think it’s easy is that we had half a billion years of evolution that has wired up our circuits so that we do it effortlessly.”

Zador thinks that if we want robots to achieve quick learning, something that would change everything in the sector, we should not look only for a perfected general learning algorithm. Instead, scientists should look to the biological neural networks given to us by nature and evolution, which could serve as a base to build on for quick and easy learning of specific types of tasks, namely the tasks that are important for survival.

Zador points to what we can learn from the squirrels living in our own backyards if we look at their genetics, neural networks, and genetic predisposition.

“You have squirrels that can jump from tree to tree within a few weeks after birth, but we don’t have mice learning the same thing. Why not?” Zador said. “It’s because one is genetically predetermined to become a tree-dwelling creature.”

Zador believes that genetic predisposition gives rise to the innate circuitry within an animal, which guides its early learning. One of the problems with carrying this idea over to AI is that the networks pursued by machine learning experts are much more generalized than the ones found in nature.
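The flavor of this idea, innate wiring plus a small amount of learning on top, can be illustrated with a toy model: a network whose first layer is fixed (standing in for circuitry wired up by evolution) and whose only trained part is a small readout. This is a conceptual sketch, not Zador's proposal in detail.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: classify points by whether they fall inside a circle.
X = rng.uniform(-1, 1, size=(500, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 0.5).astype(float)

# "Innate" layer: fixed random weights that are never trained.
W_innate = rng.normal(size=(2, 64))
H = np.tanh(X @ W_innate)

# Learned readout: the only part fit from experience (ridge regression).
readout = np.linalg.solve(H.T @ H + 1e-3 * np.eye(64), H.T @ y)
accuracy = np.mean((H @ readout > 0.5) == (y > 0.5))
print(f"Accuracy with a fixed 'innate' layer and a learned readout: {accuracy:.2f}")
```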

If ANNs reach a point in their development where they can be modeled more closely on what we see in nature, robots could begin to do tasks that were once extremely difficult for them.

California Start-Up Cerebras Has Developed World’s Biggest Chip For AI

California start-up Cerebras has developed the world's biggest computer chip, designed to train AI systems. It is set to be revealed after four years in development.

Contrary to the normal progression of chips getting smaller, the new one developed by Cerebras has a surface area bigger than an iPad. It is more than 80 times bigger than any competing chip, and it uses a large amount of electricity.

The new development reflects the astounding amount of computing power now being poured into AI, as does the $1bn investment from Microsoft into OpenAI announced last month. OpenAI is trying to develop artificial general intelligence (AGI), which would be a giant leap forward and would change much of what we know.

Cerebras is unique in this field because of the enormous size of its chip. Other companies work endlessly to create ever-smaller chips, and most of today's advanced systems are assembled from many of them. According to Patrick Moorhead, a US chip analyst, Cerebras has essentially put an entire computing cluster on a single chip.

Cerebras is looking to join the likes of Intel, Habana Labs, and the UK start-up Graphcore, all of which are building a new generation of specialized AI chips. This push is reaching its biggest stage yet, with the companies set to start delivering their first chips to customers by the end of the year. Among them, Cerebras is aiming to be the go-to supplier for the massive computing tasks run by the largest internet companies.

Many more companies and start-ups are involved in this space, including Graphcore, Wave Computing, and the China-based start-up Cambricon. They are all looking to develop specialized AI chips for inference, which takes a trained AI system and applies it in real-world scenarios.

Normally, it takes a long time for the development process to finish and for actual products to be shipped to people and companies. According to the Linley Group, a US chip research firm, many of the technical issues involved are time-consuming. Even though products take a while to develop, there is still big interest in these companies: Cerebras has raised over $200m in venture capital and, as of late last year, was valued at about $1.6bn. Global revenue for deep learning chipsets is also projected to grow substantially.

The reason these companies are focusing on this type of processor is the huge amount of data needed to train neural networks. Those neural networks are then used in deep-learning systems responsible for things such as image recognition.

The Cerebras chip is a single chip made from a 300mm-diameter circular wafer, the largest silicon disc produced in today's chip factories. The norm is for these wafers to be split up into many individual chips rather than kept as one giant one, and anyone who tried before ran into problems laying circuitry across something so big. Cerebras got past this by interconnecting the different sectors of the wafer so that they can communicate with each other and act as one big processor.
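A rough back-of-the-envelope calculation shows why building on the whole wafer yields such a large chip. The "typical" die size used below is a round number for illustration, not a figure from Cerebras or its competitors.

```python
import math

wafer_diameter_mm = 300
wafer_area_mm2 = math.pi * (wafer_diameter_mm / 2) ** 2   # ~70,700 mm²

typical_large_die_mm2 = 800    # roughly the size of today's biggest GPU dies
print(f"Wafer area: {wafer_area_mm2:,.0f} mm²")
print(f"Dies of ~{typical_large_die_mm2} mm² per wafer: "
      f"{wafer_area_mm2 / typical_large_die_mm2:.0f} (before edge and yield losses)")
```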

Looking forward, Cerebras will try to link its cores in a matrix pattern so that they can communicate with each other, connecting 400,000 cores while keeping all of the processing on a single chip.
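As a sketch of what a matrix (mesh) layout means in practice, the snippet below computes a core's immediate neighbors in a 2D grid. The 640 x 625 grid shape is chosen only because it multiplies out to 400,000; it is not Cerebras's actual topology.

```python
# Toy 2D mesh interconnect, row-major layout; grid shape is illustrative.
def mesh_neighbors(core_id, width, height):
    """Return the ids of the cores directly adjacent in the mesh."""
    row, col = divmod(core_id, width)
    neighbors = []
    if row > 0:
        neighbors.append(core_id - width)   # north
    if row < height - 1:
        neighbors.append(core_id + width)   # south
    if col > 0:
        neighbors.append(core_id - 1)       # west
    if col < width - 1:
        neighbors.append(core_id + 1)       # east
    return neighbors

print(mesh_neighbors(0, 640, 625))          # corner core: two neighbors
print(len(mesh_neighbors(1000, 640, 625)))  # interior core: four neighbors
```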

It will be exciting to see these developments move forward as Cerebras and other companies continue to advance our AI systems.
