

First Artificial Intelligence University Established in UAE




The first Artificial Intelligence (AI) focused university has been established in Abu Dhabi, the capital of the United Arab Emirates (UAE). It will be a graduate-level university with a heavy focus on research, and it is called Mohammed Bin Zayed University of Artificial Intelligence (MBZUAI).

According to the Crown Prince of Abu Dhabi, “Launching the world’s first graduate-level AI university in Abu Dhabi echoes the UAE’s pioneering spirit, and paves the way for a new era of innovation and technological advancement that benefits the UAE and the world.” 

The establishment of MBZUAI was announced at a press conference at the campus of the university, located in a suburb of the capital called Masdar City. 

“The Mohamed bin Zayed University of Artificial Intelligence (MBZUAI) is an open invitation from Abu Dhabi to the world to unleash AI’s full potential,” UAE’s Minister of State Sultan Ahmed Al Jaber said. 

Al Jaber was also appointed chair of the university's board of trustees.

New Era of Technological Advancement

The university will have some of the best facilities and equipment related to AI. It will offer two-year master's programs and four-year PhD programs, accept both local and international graduate students, and focus on three specialty fields: machine learning, computer vision, and natural language processing.

Official applications for admission to the university will open this month, with registration taking place the following August. The first classes are set to officially begin in September 2020.

Dr. Sultan Al Jaber spoke at the official press conference opening. 

“The world has entered a new era of technological advancement and rapid innovation, all driven and underpinned by AI. This new era will pave the way for unprecedented opportunities. AI has become a priority and is evident across all industries, with new technologies being introduced at an incredibly fast pace,” he said.

“The world needs more human capacity in the field of AI to bridge any possible gaps and that is why today the UAE and Abu Dhabi is announcing the launch of Mohamed Bin Zayed University of Artificial Intelligence – the world’s first graduate-level, research-based AI university,” he added.

“This university will help us to develop the necessary AI ecosystem that will enable us to leverage the full potential of this very important technology locally, regionally and globally. The university will create an active AI community in the UAE developing innovative applications for businesses and government,” he said. 

A Team With Extensive Background in AI

The university is joined by Sir Michael Brady, who is the interim president and a member of the board of trustees. He has an extensive background in artificial intelligence, robotics, and image analysis. He has said that the university is part of the UAE's broader move towards a knowledge-based economy.

“This began with the government of the UAE formulating the strategy to transform the economy to the post oil era… To invest in developing competence in renewable energy, financial services, healthcare, materials technology and others,” he said. 

“[One of the main] enabling technologies is AI, and then you ask what are the risks in realising that [vision]… and the answer is people…[So] how are we going to produce the right number of people with the right mindset [and] the right knowledge in order to lead and provide the technical leadership in these areas. That is what this university is about – providing that person power over the next 5 to 10 to 20 years,” he added. 

The establishment of the first university dedicated to artificial intelligence is a significant step forward for the field. It will likely prompt others to open, and it further dedicates resources and people to one of the most important technological developments of the modern era. 



Alex McFarland is a historian and journalist covering the newest developments in artificial intelligence.


Shell Begins to Reskill Workers in Artificial Intelligence




Royal Dutch Shell, one of the major oil and gas corporations in the world, is digitally training its workers in artificial intelligence (AI). The company has partnered with Udacity, an educational organization. If successful, the training program could become a model for other companies during a time of drastic and rapid change due to artificial intelligence technology. 

The classes are offered online through Udacity and are meant to help the company build AI skills among its workers. According to Shell, around 2,000 of its 82,000 employees around the globe have expressed interest in the AI classes. Many workers have said that their managers are approaching them and asking about courses on topics like Python programming and training neural networks. The training is all voluntary. 

The pilot program with Udacity launched back in 2019 after Shell needed more AI-skilled workers due to the large number of AI-related projects. The company has relied on AI for many aspects of its operations, from deepwater drilling and maintenance to predictive analysis and autonomous computing. 

After the pilot program proved to be successful, the company expanded it and looked toward petroleum engineers, chemists, data scientists, geophysicists, and more. The online program takes around four to six months to complete, and employees work on it about 10 to 15 hours per week. Shell pays for the customized online coursework, which is called a nanodegree. 

Reasons for AI Workers

There are two major reasons why Shell is in need of AI-skilled workers. First, the company is currently developing alternative sources of energy, and it is set to spend up to $2 billion on new energy technologies. According to Dan Jeavons, general manager of data science at Shell, its power business is “digitally native, and the differentiation is going to be around AI.”

The other reason for the need for AI workers is that Shell is still running its massive oil business. Workers reskilled in AI through the online program will be able to identify problems in maintenance equipment before it breaks down, and the new knowledge will also help them find areas where carbon emissions can be reduced. On top of all this, machine learning algorithms could be used to automatically process seismic data and collect information on underground rock formations. 

“The potential to move the needle and help people understand that we’re serious about trying to change the way we do things for the better is not an easy task,” Jeavons says. “But one thing we do know is that technology is a huge element of that change. We need to find a way to provide more and cleaner energy and investing in AI is a key way in which we’re going to do that.”

Shell also hopes that the Udacity AI collaboration will attract younger workers, many of whom view this type of work as dangerous and physically demanding. 

Paul Donnelly is director of industry marketing at Aspen Technology. The company specializes in complex manufacturing processes. 

“Young people are digital natives,” says Donnelly. “When they come into the workforce, energy and chemical companies are unfortunately competing with Facebook, Amazon, Netflix and Google. It’s tough to compete with those companies.”

Targeting Existing Workforce

One of the biggest challenges for these companies is to transition their current workforce. 

“The worst case scenario is laying people off and then going out and hiring all new workers with the skills you need,” says Gabe Dalporto, CEO of Udacity. “First of all, our universities can’t turn out all the workers we’ll need for the jobs of the future and it’s expensive. The cost of reskilling is so much less.”

According to the company, the Udacity pilot program resulted in an increase in employee satisfaction among workers who completed the coursework. 

“We don’t want people to feel that they’re stagnant and not growing as the company changes,” Jeavons says.

As artificial intelligence becomes more important in many different industries, companies will be forced to adapt. Retraining will be one of the top approaches to dealing with the change. There have been many instances of retraining that did not provide sufficient results, but if Shell’s program is successful, it could be used as a model in the future. 




Anastassia Loukina, Senior Research Scientist (NLP/Speech) at ETS – Interview Series





Anastassia Loukina is a research scientist at Educational Testing Service (ETS), where she works on automated scoring of speech.

Her research interests span a wide range of topics. She has worked, among other things, on Modern Greek dialects, speech rhythm, and automated prosody analysis.

Her current work focuses on combining tools and methods from speech technologies and machine learning with insights from studies on speech perception/production in order to build automated scoring models for evaluating non-native speech.

You clearly have a love of languages, what introduced you to this passion?

I grew up speaking Russian in St. Petersburg, Russia and I remember being fascinated when I was first introduced to the English language: for some words, there was a pattern that made it possible to “convert” a Russian word to an English word. And then I would come across a word where “my” pattern failed and try to come up with a better, more general rule. At that time of course, I knew nothing about linguistic typology or the difference between cognates and loan words, but this fueled my curiosity and desire to learn more languages. This passion for identifying patterns in how people speak and testing them on the data is what led me to phonetics, machine learning and the work I am doing now.

Prior to your current work in Natural Language Processing (NLP) you were a translator between English-Russian and Modern Greek-Russian. Do you believe that your work as a translator has given you additional insights into some of the nuances and problems associated with NLP?

My primary identity has always been that of a researcher. It’s true that I started my academic career as a scholar of Modern Greek, or more specifically, Modern Greek phonetics. For my doctoral work, I explored phonetic differences between several Modern Greek dialects and how the differences between these dialects could be linked to the history of the area. I argued that some of the differences between the dialects could have emerged as a result of the language contact between each dialect and other languages spoken in the area. While I no longer work on Modern Greek, the changes that happen when two languages come in contact with each other are still at the heart of my work: only this time I focus on what happens when an individual is learning a new language and how technology can help do this most efficiently.

When it comes to the English language, there are a myriad of accents. How do you design an NLP system with the capability to understand all of the different dialects? Is it a simple matter of feeding the deep learning algorithm additional big data from each type of accent?

There are several approaches that have been used in the past to address this. In addition to building one large model that covers all accents, you could first identify the accent and then use a custom model for this accent, or you can try multiple models at once and pick the one which works best. Ultimately, to achieve a good performance on a wide range of accents you need training and evaluation data representative of the many accents that a system may encounter.
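The "run several accent-specific models and keep the best result" strategy described here can be sketched in a few lines. This is a toy illustration, not ETS's system: the models below are stand-in functions that return a (transcript, confidence) pair, where a real system would invoke accent-specific speech recognizers.

```python
# Toy sketch: decode with several accent-specific models and keep the
# hypothesis with the highest confidence. The models and scores here are
# made up for illustration; a real recognizer would consume audio.

def model_us(audio):
    return "tomato", 0.62   # hypothetical US-English model

def model_uk(audio):
    return "tomahto", 0.81  # hypothetical British-English model

def model_in(audio):
    return "tomato", 0.55   # hypothetical Indian-English model

ACCENT_MODELS = {"en-US": model_us, "en-GB": model_uk, "en-IN": model_in}

def best_hypothesis(audio):
    """Run every accent model and keep the most confident transcript."""
    results = {name: fn(audio) for name, fn in ACCENT_MODELS.items()}
    best = max(results, key=lambda name: results[name][1])
    transcript, confidence = results[best]
    return best, transcript, confidence

accent, text, conf = best_hypothesis(b"...raw audio bytes...")
print(accent, text, conf)  # en-GB tomahto 0.81
```

The alternative strategy mentioned above (classify the accent first, then decode with a single matching model) trades this brute-force decoding cost for a dependency on the accuracy of the accent classifier.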

At ETS we conduct comprehensive evaluations to make sure that the scores produced by our automated systems reflect differences in the actual skills we want to measure and are not influenced by the demographic characteristics of the learner such as their gender, race, or country of origin.

Children and/or language learners often have difficulty with perfect pronunciation. How do you overcome the pronunciation problem?

There is no such thing as perfect pronunciation: the way we speak is closely linked to our identity and as developers and researchers our goal is to make sure that our systems are fair to all users.

Both language learners and children present particular challenges for speech-based systems. For example, child voices not only have very different acoustic quality, but children also speak differently from adults and there is a lot of variability between children. As a result, developing an automated speech recognition system for children is usually a separate task that requires a large amount of child speech data.

Similarly, even though there are many similarities between language learners from the same background, learners can vary widely in their use of phonetic, grammatical and lexical patterns making speech recognition a particularly challenging task. When building our systems for scoring English language proficiency, we use the data from language learners with a wide range of proficiencies and native languages.

In January 2018, you published ‘Using exemplar responses for training and evaluating automated speech scoring systems’. What are some of the main takeaways that should be understood from this paper?

In this paper, we looked at how the quality of training and testing data affects the performance of automated scoring systems.

Automated scoring systems, like many other automated systems, are trained on data that has been labeled by humans. In this case, these are scores assigned by human raters. Human raters do not always agree on the scores they assign. There are several strategies used in assessment to ensure that the final score reported to the test-taker remains highly reliable despite variation in human agreement at the level of the individual question. However, since automated scoring engines are usually trained using response-level scores, any inconsistencies in those scores may negatively affect the system.

We had access to a large amount of data with different levels of agreement between human raters, which let us compare system performance under different conditions. What we found is that training the system on perfect data doesn’t actually improve its performance over a system trained on data with noisier labels. Perfect labels only give you an advantage when the training set is very small. On the other hand, the quality of human labels had a huge effect on system evaluation: your performance estimates can be up to 30% higher if you evaluate on clean labels.

The takeaway message is that if you have a lot of data and resources to clean your gold-standard labels, it might be smarter to clean the labels in the evaluation set rather than the labels in the training set. And this finding applies not just to automated scoring but to many other areas too.
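The asymmetry described here can be illustrated with a small simulation (a toy sketch under invented assumptions, not the paper's actual experiment): a model trained on noisy labels looks considerably better when scored against clean evaluation labels than against equally noisy ones, because rater noise in the evaluation set puts a ceiling on measurable agreement.

```python
# Toy simulation: a latent "true" score drives both the features and the
# human labels; human labels carry rater noise. We train on noisy labels,
# then evaluate the same model against noisy vs. clean evaluation labels.
import numpy as np

rng = np.random.default_rng(0)

def make_split(n, label_noise):
    truth = rng.normal(size=n)                          # latent true score
    X = truth[:, None] + rng.normal(scale=0.3, size=(n, 3))
    y = truth + rng.normal(scale=label_noise, size=n)   # noisy human label
    return X, y, truth

# Train a simple least-squares scorer on noisy labels (the realistic case).
X_tr, y_tr, _ = make_split(2000, label_noise=0.8)
w, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)

# Evaluate against noisy vs. clean labels for the same responses.
X_ev, y_noisy, y_clean = make_split(500, label_noise=0.8)
pred = X_ev @ w

r_noisy = np.corrcoef(pred, y_noisy)[0, 1]
r_clean = np.corrcoef(pred, y_clean)[0, 1]
print(f"agreement vs noisy eval labels: {r_noisy:.2f}")
print(f"agreement vs clean eval labels: {r_clean:.2f}")
```

The correlation against clean labels comes out clearly higher, which is why cleaning the evaluation set changes the performance estimate much more than cleaning the (large) training set changes the model.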

Could you describe some of your work at ETS?

I work on speech scoring systems that process spoken language in an educational context. One such system is SpeechRater®, which uses advanced speech recognition and analysis technology to assess and provide detailed feedback about English language speaking proficiency. SpeechRater is a very mature application that has been around for more than 10 years. I build scoring models for different applications and work with other colleagues across ETS to ensure that our scores are reliable, fair and valid for all test takers. We also work with other groups at ETS to continuously monitor system performance.

In addition to maintaining and improving our operational systems, we prototype new systems. One of the projects I am very excited about is RelayReader™: an application designed to help developing readers gain fluency and confidence. When reading with RelayReader, a user takes turns listening to and reading aloud a book. Their reading is then sent to our servers to provide feedback. In terms of speech processing, the main challenge of this application is how to measure learning and provide actionable and reliable feedback unobtrusively, without interfering with the reader’s engagement with the book.

What’s your favorite part of working with ETS?

What initially attracted me to ETS is that it is a non-profit organization with a mission to advance the quality of education for all people around the world. While of course it is great when research leads to a product, I appreciate having an opportunity to work on projects that are more foundational in nature but will help with product development in the future. I also cherish the fact that ETS takes issues such as data privacy and fairness very seriously and all our systems undergo very stringent assessment before being deployed operationally.

But what truly makes ETS a great place to work is its people. We have an amazing community of scientists, engineers and developers from many different backgrounds which allows for a lot of interesting collaborations.

Do you believe that an AI will ever be able to pass the Turing Test?

Since the 1950s, there have been a lot of interpretations of how the Turing test should be done in practice. There is probably general agreement that the Turing test hasn’t been passed in the philosophical sense: there is no AI system that thinks like a human. However, this has also become a very niche subject. Most people don’t build their systems to pass the Turing test – we want them to achieve specific goals.

For some of these tasks, for example, speech recognition or natural language understanding, human performance may be rightly considered the gold standard. But there are also many other tasks where we would expect an automated system to do much better than humans or where an automated system and human expert need to work together to achieve the best result. For example, in an educational context we don’t want an AI system to replace a teacher: we want it to help teachers, whether it is through identifying patterns in student learning trajectories, help with grading or finding the best teaching materials.

Is there anything else that you would like to share about ETS or NLP?

Many people know ETS for its assessments and automated scoring systems. But we do much more than that. We have many capabilities, from voice biometrics to spoken dialogue applications, and we are always looking for new ways to integrate technology into learning. Now that many students are learning from home, we have opened several of our research capabilities to the general public.

Thank you for the interview and for offering this insight on the latest advances in NLP and speech recognition. Anyone who wishes to learn more can visit Educational Testing Service.



Elnaz Sarraf, CEO and founder of Roybi – Interview Series





Can you walk us through your journey from growing up in Iran to becoming an entrepreneur?

My childhood and Iranian heritage definitely play an important role in who I am today.  My parents paid a lot of attention to my education at home and in school. My dad was a small business owner and was the face of our company outside of the home, while my mom took care of all the financial and operational aspects of our business at home, because as a woman in Iran, it would not have been acceptable for her to be involved directly in business negotiations.  But the limitations imposed on women didn’t stop my parents from exposing me to every aspect of our business. My dad took me along to many of his meetings; observing the art of negotiating and conducting business deals fascinated me with both the business and social aspects of entrepreneurship.

While at home, I watched my parents manage the company together and discuss the financial elements of holding our business together and finding innovative ways to grow. My summers were always filled with extracurricular classes in the arts, engineering and science. I’m very grateful to my parents who exposed me to a diverse set of social and academic skills at an early age. When I was starting ROYBI, I knew that I would have to do a variety of different tasks myself until the company grew. Because of my background in the arts and engineering, I was able to multitask on projects such as industrial design, website designs, coding, and presenting my ideas and vision for the company to investors and partners.


What was it that inspired you to design an AI-powered educational robot?

Our education system needs a fundamental change, and that change starts with early childhood education. It should no longer be a one-size-fits-all approach. Every child has his/her own unique set of skills and our focus needs to be on their individual capabilities. We saw a huge gap in this area and decided to use technology, and specifically artificial intelligence, to bring about change that can help children, parents, and teachers. We developed Roybi Robot to interact with children as young as 3 years old because early childhood is the most critical age in a child’s growth and future success. We’re constantly engaged in thinking about the benefits of robotics and AI in early childhood education.


ROYBI teaches children languages and STEM skills through play. What are some examples of games that children can play?

We use different methodologies to deliver our educational content. Some lessons are only based on conversations. By using our voice recognition technology, Roybi Robot can understand if the child is saying the correct word or not. If the answer is not correct, it encourages the child to repeat using playful and compassionate messages.

Lessons also alternate between fun, educational content and games that can be played by interacting with the buttons on Roybi Robot’s charging plate. This creates more involvement and encourages children to move their hands, body and gaze and stay engaged.
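The recognize-compare-encourage loop described here can be sketched in a few lines. This is a toy illustration, not ROYBI's implementation: the messages are made up, and a real system would take speech recognizer output rather than a typed word.

```python
# Toy sketch of the lesson-feedback loop: compare the recognized word
# against the target word and respond with a playful, compassionate
# message either way. Messages are illustrative, not ROYBI's actual prompts.

def lesson_feedback(recognized: str, target: str) -> str:
    if recognized.strip().lower() == target.lower():
        return f"Great job! '{target}' is exactly right!"
    return f"Nice try! Let's say it together: '{target}'."

print(lesson_feedback("Elephant", "elephant"))  # Great job! 'elephant' is exactly right!
print(lesson_feedback("elefant", "elephant"))   # Nice try! Let's say it together: 'elephant'.
```

In practice the comparison would be fuzzier than exact string equality (e.g. pronunciation scoring on the audio itself), since a child's near-miss should still earn encouragement.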


Facial detection and emotion detection are a primary focus of ROYBI's AI. Can you discuss some of the technologies behind this?

We use several technologies to deliver our content. One important AI component is voice recognition. Based on what the child says during the lessons, we can understand their progress and interest and create our reports for parents and educators. Facial detection is being used to initiate a conversation with a child to say “Hello.” And we use emotion detection as social-emotional support for the child while interacting with Roybi Robot, the educational robot.


ROYBI was recently featured on the cover of TIME magazine as one of the ‘Best Inventions of 2019’. How did it feel to see your product on the cover of one of the top magazines in the world?

We were shocked, excited, honored, and overwhelmed at the same time. We knew we were onto something big that would change the world, but receiving such amazing recognition and even getting featured on the cover of the magazine gave us so much encouragement to continue our path even stronger!


There have been some pilots with ROYBI in classrooms. Can you share some of the feedback that you’ve received from teachers?

Our content is created by teachers, and we’re hoping to pilot in schools in the next academic year. The teachers who work with us to create the lessons give us direct feedback on what is needed most to encourage children to engage with our content.


You’ve stated that you want to see every child in the world hold a ROYBI in their hands. Do you believe that this could become a possibility if the classroom pilots are a success?

Absolutely! We are on our way to providing learning in both home and classroom settings, and we want to change the way our children learn. To do that, we will work to provide our Roybi Robot to as many children as possible, and as you can imagine, it is an ambitious mission. To make this happen, we also invite future partners, delegates, governments, investors, mentors, and anyone who shares the same passion as us to give us a hand, so together we can change the world for our children!


ROYBI recently made an acquisition. What was the purpose behind it? Was it simply to offer more language options?

The recent acquisition happened as a strategic decision to make ROYBI’s technology even more accessible to all children around the world. With this acquisition, ROYBI becomes a leader in voice recognition AI that is specifically developed for children. As part of this proprietary technology, we can now accelerate language development efforts as well.


What would you tell women who feel that AI and tech are dominated by men and that it’s not an even playing field for them?

It is time to change this! Put your best effort at work. You got this!


Do you have any advice for female entrepreneurs who feel that it is more difficult for them to be taken seriously and to receive funding than their male counterparts?

The only limitation is in your own thoughts. There is no limit to what you can achieve, no matter how difficult a situation may seem. You will find support from many people around you who share a similar passion. I encourage women to engage and involve themselves more in technology and in how it is affecting and will affect our future generations.

To make the change happen, first, we need to start by ourselves and continue it together!


Do you have anything else that you would like to share?

As part of growing ROYBI globally, we are continuously looking for partnerships with schools, government entities, and foundations to help us make Roybi Robot and education more accessible around the world and to every child regardless of their location or family income status. If you believe you can help us in our mission, reach out to us at

Elnaz Sarraf is an inspiration to women and minorities, and shows that they too can be a success. Please visit the Roybi website to learn more or to order a Roybi Robot for a young child.
