Education

A New Report Claims Artificial Intelligence Skills Will Be Most In-Demand


Udemy, the largest online learning platform, just published its Udemy for Business 2020 Workplace Learning Trends Report: The Skills of the Future (48 pp., PDF, opt-in). As Forbes noticed, the report claims that it is now key “to prepare workforces for the future of work in an AI-enabled world.” The report states that “In the world of finance, investment funds managed by AI and computers account for 35% of America’s stock market today,” citing a recent article in The Economist, “The rise of the financial machines.”

For its part, Udemy notes in the report that AI is reshaping the world of work: 65% of leaders cited AI and robotics as an important or very important issue in human capital. Still, only 26% of the organizations Udemy surveyed are ready or very ready to address the impact of these new technologies.

Udemy notes five key trends in 2020:

Trend 1 – AI will go mainstream in 2020

Trend 2 – 2020 is about realizing the full potential of humans and machines

Trend 3 – Learning & development is starting to tackle reskilling the workforce

Trend 4 – Organizations are building a data-driven culture

Trend 5 – Countries across the world are upskilling in highly coveted tech skills

Detailing its predictions specifically concerning AI and robotics, the Udemy report also notes that TensorFlow, an end-to-end open-source platform for machine learning, is “the most popular tech skill of the last three years, exponentially increasing between 2016 and 2019.”

According to Forbes, the report’s other two key projections are Udemy’s view that there will be “robust demand for AI and data science skills, in addition to web development frameworks, cloud computing, and IT certifications, including AWS, CompTIA & Docker,” and that SAP expertise (knowledge and qualifications of the enterprise software used to manage business operations and customer relations) “is projected to be the fastest-growing process-related skill set in 2020.”

In more detailed projections, the report states that “TensorFlow, OpenCV, and neural networks are the foundational skills many data scientists are pursuing and perfecting today to advance their AI-based career strategies.” All these skills are a basis for understanding and developing artificial intelligence apps and platforms.

Describing TensorFlow, Forbes notes that it is “a free and open-source software library for dataflow and differentiable programming across a range of tasks. It is a symbolic math library and is also used for machine learning applications such as neural networks.”
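
TensorFlow itself is a large system, but the two ideas Forbes names, dataflow and differentiable programming, can be illustrated without it. Below is a hedged, pure-Python sketch of forward-mode automatic differentiation using dual numbers; everything here is invented for illustration and is a toy, not how TensorFlow actually works internally.

```python
class Dual:
    """Dual number: a value paired with its derivative. Propagating the
    derivative through arithmetic is the core idea behind differentiable
    programming engines such as TensorFlow (greatly simplified here)."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)
    __rmul__ = __mul__


def derivative(f, x):
    """Evaluate f and its derivative at x via forward-mode autodiff."""
    out = f(Dual(x, 1.0))
    return out.val, out.dot


# d/dx (3x^2 + 2x) at x = 2: value 16, derivative 6x + 2 = 14
val, grad = derivative(lambda x: 3 * x * x + 2 * x, 2.0)
```

The same "write ordinary code, get gradients for free" experience is what frameworks like TensorFlow provide at scale over tensors and neural-network layers.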

The other closely connected category in which Udemy found strong interest is Robotic Process Automation (RPA) and Business Process Management (BPM). As explained, RPA refers to the use of process automation tools to quickly replicate how human beings perform routine daily office work in popular productivity apps such as Microsoft Excel, databases, or web applications.
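
Real RPA products drive the actual GUI applications (Excel, browsers, and so on); as a rough, hypothetical illustration of the spirit of RPA, the sketch below automates one routine spreadsheet chore, filling in a totals column, using only the Python standard library. The data and column names are invented.

```python
import csv
import io

# A tiny stand-in for a spreadsheet a clerk would update by hand.
raw = """order_id,quantity,unit_price
1001,3,9.50
1002,1,24.00
1003,5,2.20
"""


def add_total_column(csv_text):
    """Replicate the routine manual step of filling in a 'total' column."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    for row in rows:
        row["total"] = f'{int(row["quantity"]) * float(row["unit_price"]):.2f}'
    out = io.StringIO()
    writer = csv.DictWriter(
        out, fieldnames=["order_id", "quantity", "unit_price", "total"])
    writer.writeheader()
    writer.writerows(rows)
    return out.getvalue()


result = add_total_column(raw)
```

A script like this captures why RPA skills pair naturally with BPM: once a routine task is expressed as code, it can be scheduled, audited, and scaled in ways manual work cannot.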

 


Former diplomat and translator for the UN, currently freelance journalist/writer/researcher, focusing on modern technology, artificial intelligence, and modern culture.


How Riiid! is Helping to Bring in New Era of AI-Education


Riiid is a South Korea-based Series C startup with $31.3 million in funding. The company develops and provides AI-powered solutions for the education sector, with a specialized focus on standardized testing.

The team at Riiid also conducts research to develop its AI models, which are then deployed on the company’s commercialized platform, “Santa.”

Santa and ITSs

Santa is a multi-platform English Intelligent Tutoring System (ITS), and it contains an AI tutor that provides a one-on-one curriculum for users.

ITSs are receiving a lot of attention in both the AI and education sectors, mostly due to their ability to provide students with personalized learning experiences through the use of deep-learning algorithms. ITSs suggest certain studying strategies for individuals.

Santa is a test prep platform for the Test of English for International Communication (TOEIC). The platform has over one million users in South Korea, specifically for the TOEIC.

After establishing its first United States office in early 2020, Riiid is looking to expand further and go beyond just the TOEIC, with a plan to target other test areas such as the ACT, SAT, and GMAT.

Recent Studies and Research

The company has two recent research papers, with one of the key findings being that deep learning algorithms can help improve student engagement.

One of the papers, titled “Prescribing Deep Attentive Score Prediction Attracts Improved Student Engagement,” was accepted at the top-tier AI-education conference Educational Data Mining (EDM).

The team ran a controlled A/B test on the ITS with two separate models based on collaborative filtering and deep-learning algorithms.

After testing on 78,000 users, the team determined that the deep learning model produced higher student engagement, including a higher diagnostic test completion ratio and more questions answered. It also resulted in more active engagement on Santa, reflected in a higher purchase rate and greater total profit.
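
The engagement metrics named above (completion ratio, questions answered) reduce to simple per-arm arithmetic in an A/B test. The sketch below computes them for two hypothetical arms; all numbers are invented and are not Riiid’s actual results.

```python
# Hypothetical per-arm tallies from an A/B test like the one described.
arms = {
    "collaborative_filtering": {
        "users": 39_000, "completed_diagnostic": 21_450, "questions": 1_170_000},
    "deep_learning": {
        "users": 39_000, "completed_diagnostic": 26_130, "questions": 1_404_000},
}


def engagement_metrics(arm):
    """Diagnostic completion ratio and mean questions answered per user."""
    return (arm["completed_diagnostic"] / arm["users"],
            arm["questions"] / arm["users"])


cf_ratio, cf_questions = engagement_metrics(arms["collaborative_filtering"])
dl_ratio, dl_questions = engagement_metrics(arms["deep_learning"])
lift = dl_ratio - cf_ratio  # absolute lift in completion ratio
```

In a real study one would also test whether such a lift is statistically significant before drawing conclusions.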

The second paper was titled “Deep Attentive Study Session Dropout Prediction in Mobile Learning Environment.” It was accepted by the global AI education conference CSEDU.

The paper focused on student engagement, and the team sought insight into student dropout prediction, specifically in regard to study session dropout prediction in a mobile learning environment. By observing this problem, the team believed there was a chance to increase student engagement.

The research suggested a method for maximizing learning effects by observing the dropout probability of individual users within the mobile learning environment. Their model is called DAS, or Deep Attentive Study Session Dropout Prediction in Mobile Learning Environment.

Through the use of deep attentive computations that extract information out of student interactions, Riiid’s model can accurately predict dropout probability.

The Santa platform was incorporated into the model, providing questions that were determined to have a low-dropout probability. By recommending certain questions, students were more likely to stay engaged and continue studying, rather than dropping out of the session.
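
Riiid’s DAS model performs deep attentive computations over interaction sequences; as a deliberately simplified stand-in, the sketch below scores dropout risk with a logistic function over a few hand-picked session features and then recommends the candidate question with the lowest predicted risk. The feature names and weights are invented for illustration and are not the paper’s model.

```python
import math

# Invented weights for a toy logistic dropout-risk model.
WEIGHTS = {
    "elapsed_minutes": 0.08,      # longer sessions -> more likely to stop
    "recent_wrong_streak": 0.6,   # frustration signal
    "question_difficulty": 0.9,   # harder next question -> higher risk
}
BIAS = -3.0


def dropout_probability(features):
    """Logistic score: probability the student ends the session now."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))


def recommend(candidates, session):
    """Pick the candidate question with the lowest predicted dropout risk."""
    def risk(question):
        return dropout_probability(
            {**session, "question_difficulty": question["difficulty"]})
    return min(candidates, key=risk)


session = {"elapsed_minutes": 20.0, "recent_wrong_streak": 2.0}
candidates = [{"id": "q1", "difficulty": 0.9}, {"id": "q2", "difficulty": 0.3}]
best = recommend(candidates, session)  # the easier question is lower-risk here
```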

According to the research team, “To the best of our knowledge, this is the first attempt to investigate study session dropout in a mobile learning environment.”

Riiid is one of the world’s leading startups developing ITSs and providing AI solutions in the education sector. As education and AI technology become more interconnected, companies like Riiid will usher in a new era of learning methods and systems, while trying to overcome the current challenges surrounding student engagement.

 


The Future of Speech Scoring – Thought Leaders


Across the world, the number of English language learners continues to rise. Educational institutions and employers need to be able to assess the English proficiency of language learners – in particular, their speaking ability, since spoken language remains among the most essential language abilities. The challenge, for both assessment developers and end users, is finding a way to do so that is accurate, fast and financially viable. As part of this challenge, scoring these assessments comes with its own set of factors, especially when we consider the different areas (speech, writing, etc.) one is being tested on. With the demand for English-language skills across the globe only expected to increase, what would the future of speech scoring need to look like in order to meet these needs?

The answer to that question, in part, is found in the evolution of speech scoring to date. Rating constructed spoken responses has historically been done using human raters. This process, however, tends to be expensive and slow, and has additional challenges including scalability and various shortcomings of human raters themselves (e.g., rater subjectivity or bias). As discussed in our book Automated Speaking Assessment: Using Language Technologies to Score Spontaneous Speech, in order to address these challenges, an increasing number of assessments now make use of automated speech scoring technology as the sole source of scoring or in combination with human raters. Before deploying automated scoring engines, however, their performance needs to be thoroughly evaluated, particularly in relation to the score reliability, validity (does the system measure what it is supposed to?) and fairness (i.e., the system should not introduce bias related to population subgroups such as gender or native language).

Since 2006, ETS’s own speech scoring engine, SpeechRater®, has been operationalized in the TOEFL® Practice Online (TPO) assessment (used by prospective test takers to prepare for the TOEFL iBT® assessment), and since 2019, SpeechRater has also been used, along with human raters, for scoring the speaking section of the TOEFL iBT® assessment. The engine evaluates a wide range of speaking proficiency for spontaneous non-native speech, including pronunciation and fluency, vocabulary range and grammar, and higher-level speaking abilities related to coherence and progression of ideas. These features are computed by using natural language processing (NLP) and speech processing algorithms. A statistical model is then applied to these features in order to assign a final score to a test taker’s response.
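
The final step described above, applying a statistical model to computed features to produce a score, can be sketched as a simple weighted linear model. SpeechRater’s actual features, weights, and model are not public; the feature names, weights, and 0–4 score scale below are assumptions made purely for illustration.

```python
# Invented feature weights for a toy linear scoring model.
FEATURE_WEIGHTS = {
    "speaking_rate": 0.8,         # fluency
    "pause_frequency": -0.6,      # fluency (more pauses -> lower score)
    "vocabulary_diversity": 0.7,  # vocabulary range
    "grammar_accuracy": 0.9,      # grammar
}
INTERCEPT = 1.0


def score_response(features, lo=0.0, hi=4.0):
    """Weighted sum of normalized features, clipped to the score scale."""
    raw = INTERCEPT + sum(FEATURE_WEIGHTS[name] * value
                          for name, value in features.items())
    return max(lo, min(hi, round(raw, 2)))


features = {"speaking_rate": 0.9, "pause_frequency": 0.4,
            "vocabulary_diversity": 0.7, "grammar_accuracy": 0.8}
score = score_response(features)
```

In practice the model would be trained on human-scored responses and then reviewed by content experts, as the article notes, rather than hand-weighted like this toy.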

While this model is trained on previously observed data scored by human raters, it is also reviewed by content experts to maximize its validity. If a response is found to be non-scorable due to audio quality or other issues, the engine can flag it for further review to avoid generating a potentially unreliable or invalid score. Human raters are always involved in the scoring of spoken responses in the high-stakes TOEFL iBT speaking assessment.

As human raters and SpeechRater are currently used together to score test takers’ responses in high-stakes speaking assessments, both play a part in what the future of scoring English language proficiency can be. Human raters have the ability to understand the content and discourse organization of a spoken response in a deep way. In contrast, automated speech scoring engines can more precisely measure certain detailed aspects of speech, such as fluency or pronunciation, exhibit perfect consistency over time, can reduce overall scoring time and cost, and are more easily scaled to support large testing volumes. When human raters and automated speech scoring systems are combined, the resulting system can benefit from the strengths of each scoring approach.

In order to continuously evolve automated speech scoring engines, research and development needs to focus on the following aspects, among others:

  • Building automatic speech recognition systems with higher accuracy: Since most features of a speech scoring system rely directly or indirectly on this component of the system that converts the test taker’s speech to a text transcription, highly accurate automatic speech recognition is essential for obtaining valid features;
  • Exploration of new ways to combine human and automated scores: In order to take full advantage of the respective strengths of human rater scores and automated engine scores, more ways of combining this evidence need to be explored;
  • Accounting for abnormalities in responses, both technical and behavioral: High-performing filters capable of flagging such responses and excluding them from automated scoring are necessary to help ensure the validity and reliability of the resulting assessment scores;
  • Assessment of spontaneous or conversational speech that occurs most often in day-to-day life: While automated scoring of such interactive speech is an important goal, these items present numerous scoring challenges, including overall evaluation and scoring;
  • Exploring deep learning technologies for automated speech scoring: This relatively recent paradigm within machine learning has produced substantial performance increases on many artificial intelligence (AI) tasks in recent years (e.g., automatic speech recognition, image recognition), and therefore it is likely that automated scoring also may benefit from using this technology. However, since most of these systems can be considered “black-box” approaches, attention to the interpretability of the resulting score will be important to maintain some level of transparency.
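
The filtering idea in the list above can be sketched as a rule-based pre-check that flags a response as non-scorable before any scoring model runs. The feature names and thresholds below are invented for illustration; production filters would be far more sophisticated.

```python
def flag_response(duration_sec, silence_ratio, mean_volume_db):
    """Return the list of reasons a response should be routed to human
    review instead of automated scoring; an empty list means it passes."""
    reasons = []
    if duration_sec < 5.0:
        reasons.append("too_short")
    if silence_ratio > 0.8:
        reasons.append("mostly_silent")
    if mean_volume_db < -50.0:
        reasons.append("low_audio_level")
    return reasons


ok = flag_response(duration_sec=42.0, silence_ratio=0.25, mean_volume_db=-18.0)
bad = flag_response(duration_sec=3.0, silence_ratio=0.9, mean_volume_db=-60.0)
```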

To accommodate a growing and changing English-language learner population, next-generation speech scoring systems must expand automation and the range of what they are able to measure, enabling consistency and scalability. That is not to say the human element will be removed, especially for high-stakes assessments. Human raters will likely remain essential for capturing certain aspects of speech that will remain hard to evaluate accurately by automated scoring systems for a while to come, including the detailed aspects of spoken content and discourse. Using automated speech scoring systems in isolation for consequential assessments also runs the risk of failing to identify problematic responses by test takers, for instance, responses that are off-topic or plagiarized, and can as a consequence lead to reduced validity and reliability. Using both human raters and automated scoring systems in combination may be the best way for scoring speech in high-stakes assessments for the foreseeable future, particularly if spontaneous or conversational speech is evaluated.

Written by: Keelan Evanini, Director of Speech Research, ETS & Klaus Zechner, Managing Senior Research Scientist, Speech, ETS

ETS works with education institutions, businesses and governments to conduct research and develop assessment programs that provide meaningful information they can count on to evaluate people and programs. ETS develops, administers and scores more than 50 million tests annually in more than 180 countries at more than 9,000 locations worldwide. We design our assessments with industry-leading insight, rigorous research and an uncompromising commitment to quality so that we can help education and workplace communities make informed decisions. To learn more visit ETS.


Researchers Develop New AI to Help Create Tutoring Systems


Researchers from Carnegie Mellon University have demonstrated how they can build intelligent tutoring systems. These systems are effective at teaching various subjects, including algebra and grammar. 

The researchers used a new method that relies on artificial intelligence to allow a teacher to teach a computer. Put simply, a human teacher teaches the computer how to teach: the teacher shows it how to solve certain problems, such as multicolumn addition, and corrects it when it gets a problem wrong.

Solving Problems On Its Own

One of the interesting parts of this method is that the computer system can not only solve problems the way it was taught, but can also generalize to solve all other problems in the topic, including in ways the teacher never demonstrated.
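
As a toy illustration of that kind of generalization, consider column addition: once the per-column rule is "taught" (add the digits, write the ones digit, carry the rest), the same rule applies unchanged to problems with any number of columns. This is a simplification for illustration only, not CMU’s actual rule-learning system.

```python
def multicolumn_add(a_digits, b_digits):
    """Add two equal-length numbers given as digit lists, rightmost
    column first, by repeating one per-column rule."""
    result, carry = [], 0
    for a, b in zip(a_digits, b_digits):
        total = a + b + carry
        result.append(total % 10)  # write the ones digit in this column
        carry = total // 10        # carry into the next column
    if carry:
        result.append(carry)       # final carry becomes a new column
    return result


# "Taught" on a two-column problem: 27 + 35 = 62 -> digits [2, 6]
two_cols = multicolumn_add([7, 2], [5, 3])
# The same rule generalizes to four columns: 1999 + 1 = 2000
four_cols = multicolumn_add([9, 9, 9, 1], [1, 0, 0, 0])
```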

Daniel Weitekamp III is a Ph.D. student in CMU’s Human-Computer Interaction Institute (HCII).

“A student might learn one way to do a problem and that would be sufficient,” Weitekamp said. “But a tutoring system needs to learn every kind of way to solve a problem. It needs to learn how to teach problem solving, not just how to solve problems.”

The challenge that Weitekamp explains is one of the greatest in the development of AI-based tutoring systems. Newly developed intelligent tutoring systems can track student progress, help determine what to do next, and help students develop new skills by selecting effective practice problems. 

The Development of AI-Based Tutoring Systems

Ken Koedinger is a professor of human-computer interaction and psychology. Koedinger was one of the early developers of intelligent tutors; he and his colleagues programmed production rules by hand, and according to Koedinger, each hour of tutored instruction took 200 hours of development. The group eventually developed a more effective method in which all of the possible ways to solve a problem are demonstrated, cutting those 200 hours down to 40 or 50; for some problem types, however, it is extremely difficult to demonstrate every possible solution.

Koedinger has said that the new method could end up allowing a teacher to develop a 30-minute lesson in the same amount of time. 

“The only way to get to the full intelligent tutor up to now has been to write these AI rules,” Koedinger said. “But now the system is writing those rules.”

In the new method, a machine learning program simulates the ways in which students learn. Weitekamp created a teaching interface that uses a “show-and-correct” process for programming.

While the method was demonstrated with multicolumn addition, the machine learning engine that is used can be applied to other subjects, such as equation solving, fraction addition, chemistry, English grammar, and science experiment environments. 

One of the main goals is for this method to allow teachers to construct their own computerized lessons, without the need of an AI programmer. This allows teachers to apply their own personal views on how to teach or which methods to use. 

Weitekamp, Koedinger, and HCII System Scientist Erik Harpstead authored the paper describing the method. It was accepted at the Conference on Human Factors in Computing Systems (CHI 2020). The conference was originally planned for this month, but the COVID-19 pandemic forced it to be canceled. The paper can now be found in the conference proceedings, located in the Association for Computing Machinery’s Digital Library.

The Institute of Education Sciences and Google helped support the research. 
