Specialized publication KDnuggets has come up with a list of 10 free online books about AI that it considers essential reading. This is the fourth such list the publication has compiled, and the previous lists for 2017, 2018, and earlier in 2019 are also available. Here is a brief description of five titles from the current list that offer insight into the more general elements of AI.
Deep Learning
by Ian Goodfellow, Yoshua Bengio, and Aaron Courville (November 2016, 800 pages).
As the title suggests, the book concentrates on deep learning and should be very useful for all who “are working on mastering deep learning with comprehensive mathematical and conceptual coverage of Monte Carlo methods, recurrent and recursive nets, autoencoders, and deep generative models.”
The Quest for Artificial Intelligence: A History of Ideas and Achievements
by Nils J. Nilsson (October 2009, 707 pages).
This book may well represent “the definitive history of a field that has captivated the imaginations of scientists, philosophers, and writers for centuries,” and of its achievements so far. It traces the history of AI from the early dreams of eighteenth-century (and earlier) pioneers to the more successful work of today’s AI engineers. The book’s “many diagrams and easy-to-understand descriptions of AI programs” help readers gain an understanding of how these and other AI systems work.
Self-published by the author, the book provides “a comprehensive resource on the topic that is crucial foundational knowledge for AI research and investigation.” As KDnuggets explains, “neural networks are bio-inspired mechanisms for data processing that enable computers to learn in a way that is technically similar to a biological brain. These approaches can even generalize once solutions to enough problem instances are taught to the algorithms.”
Ethical Artificial Intelligence
by Bill Hibbard (November 2015, 177 pages).
Essentially a long-form article in a book format, it combines a number of “peer-reviewed papers and material to analyze the issues of ethical artificial intelligence.” Discussing a number of technical implications of AI, it also discusses “how future AI will differ from current AI, the politics of AI, and the ultimate use of AI to help understand the nature of the universe and our place in it.”
The Essential AI Handbook for Leaders
by Peltarion (59 pages)
This brief title is actually a marketing white paper for Peltarion, the company it originated from, but as KDnuggets notes, “it provides an important overview that business leaders should appreciate when leveraging AI.” The book’s intention is “to help more people understand what AI is and how businesses and organizations can harness the technology,” and it also “explains the fundamentals of AI, its potential benefits, and how businesses can make AI operational to create positive change.”
The complete list can be accessed here.
How Riiid! is Helping to Bring in New Era of AI-Education
Riiid is a South Korea-based Series C startup with $31.3 million in funding. The company develops and provides AI-powered solutions for the education sector, with a specialized focus on standardized testing.
The team at Riiid also conducts research in order to develop the AI models, which are then put on the company’s commercialized platform called “Santa.”
Santa and ITSs
Santa is a multi-platform English Intelligent Tutoring System (ITS), and it contains an AI tutor that provides a one-on-one curriculum for users.
ITSs are receiving a lot of attention in both the AI and education sectors, mostly due to their ability to provide students with personalized learning experiences through the use of deep-learning algorithms. ITSs suggest certain studying strategies for individuals.
Santa is a test prep platform for the Test of English for International Communication (TOEIC). The platform has over one million users in South Korea, specifically for the TOEIC.
After establishing its first United States office in early 2020, Riiid is looking to expand further and go beyond just the TOEIC, with a plan to target other test areas such as the ACT, SAT, and GMAT.
Recent Studies and Research
The company has two recent research papers, with one of the key findings being that deep learning algorithms can help improve student engagement.
One of the papers, titled “Prescribing Deep Attentive Score Prediction Attracts Improved Student Engagement,” was accepted into the top-tier AI-education conference Educational Data Mining (EDM).
The team ran a controlled A/B test on the ITS, comparing two separate models: one based on collaborative filtering and one on deep-learning algorithms.
After testing on 78,000 users, the team determined that the deep-learning model resulted in higher student engagement, reflected in a higher diagnostic-test completion ratio and a greater number of questions answered. It also resulted in more active engagement on Santa, shown through a higher purchase rate and improved total profit.
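An engagement comparison like the one above can be sketched as a simple two-proportion z-test on a completion metric. The counts below are made up for illustration; the article only reports the 78,000-user total, not the per-arm figures.

```python
import math

def completion_ratio(completed, assigned):
    """Fraction of users in an arm who finished the diagnostic test."""
    return completed / assigned

def two_proportion_z(c_a, n_a, c_b, n_b):
    """z-statistic comparing completion ratios of two A/B arms."""
    p_a, p_b = c_a / n_a, c_b / n_b
    p_pool = (c_a + c_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical counts: arm A = collaborative filtering, arm B = deep learning.
z = two_proportion_z(21_000, 39_000, 24_500, 39_000)
# z is strongly negative, i.e. arm B's completion ratio is higher.
```

A negative z here simply reflects that arm B (the deep-learning model) completes the diagnostic more often under these assumed counts; a real analysis would also report a p-value and confidence interval.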
The second paper was titled “Deep Attentive Study Session Dropout Prediction in Mobile Learning Environment.” It was accepted by the global AI education conference CSEDU.
The paper focused on student engagement, and the team sought insight into dropout prediction – specifically, study-session dropout in a mobile learning environment. The team believed that addressing this problem offered a chance to increase student engagement.
The research proposed a method for maximizing learning effects by estimating the dropout probability of individual users within the mobile learning environment. Their model is called DAS, short for Deep Attentive Study Session Dropout Prediction in Mobile Learning Environment.
Through the use of deep attentive computations that extract information out of student interactions, Riiid’s model can accurately predict dropout probability.
The Santa platform was incorporated into the model, providing questions that were determined to have a low-dropout probability. By recommending certain questions, students were more likely to stay engaged and continue studying, rather than dropping out of the session.
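The recommendation step described above, serving the question with the lowest predicted dropout probability, can be sketched as follows. The `predict_dropout` callable is a hypothetical stand-in for the trained DAS model, whose internals the article does not describe.

```python
def recommend_question(candidates, predict_dropout):
    """Return the candidate question with the lowest predicted
    session-dropout probability, per the strategy in the paper."""
    return min(candidates, key=predict_dropout)

# Hypothetical stand-in for the trained model: fixed per-question
# dropout probabilities instead of a learned attentive network.
dropout_prob = {"q1": 0.9, "q2": 0.3, "q3": 0.6}
best = recommend_question(dropout_prob, lambda q: dropout_prob[q])
# best == "q2"
```

A production system would likely balance this against pedagogical value rather than minimizing dropout alone, but the selection rule itself reduces to an argmin over predicted probabilities.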
According to the research team, “To the best of our knowledge, this is the first attempt to investigate study session dropout in a mobile learning environment.”
Riiid is one of the world’s leading startups for developing ITSs and providing AI-solutions in the education sector. As education and AI technology become more interconnected, companies like Riiid will usher in a new era of learning methods and systems, while trying to overcome the current challenges surrounding student engagement.
The Future of Speech Scoring – Thought Leaders
Across the world, the number of English language learners continues to rise. Educational institutions and employers need to be able to assess the English proficiency of language learners – in particular their speaking ability, since speaking remains among the most essential language skills. The challenge, for both assessment developers and end users, is finding a way to do so that is accurate, fast, and financially viable. Scoring these assessments brings its own set of challenges, especially given the different areas (speech, writing, etc.) being tested. With the demand for English-language skills across the globe only expected to increase, what would the future of speech scoring need to look like in order to meet these needs?
The answer to that question, in part, is found in the evolution of speech scoring to date. Rating constructed spoken responses has historically been done using human raters. This process, however, tends to be expensive and slow, and has additional challenges including scalability and various shortcomings of human raters themselves (e.g., rater subjectivity or bias). As discussed in our book Automated Speaking Assessment: Using Language Technologies to Score Spontaneous Speech, in order to address these challenges, an increasing number of assessments now make use of automated speech scoring technology as the sole source of scoring or in combination with human raters. Before deploying automated scoring engines, however, their performance needs to be thoroughly evaluated, particularly in relation to score reliability, validity (does the system measure what it is supposed to?), and fairness (i.e., the system should not introduce bias related to population subgroups such as gender or native language).
Since 2006, ETS’s own speech scoring engine, SpeechRater®, has been operationalized in the TOEFL® Practice Online (TPO) assessment (used by prospective test takers to prepare for the TOEFL iBT® assessment), and since 2019, SpeechRater has also been used, along with human raters, for scoring the speaking section of the TOEFL iBT® assessment. The engine evaluates a wide range of speaking proficiency for spontaneous non-native speech, including pronunciation and fluency, vocabulary range and grammar, and higher-level speaking abilities related to coherence and progression of ideas. These features are computed by using natural language processing (NLP) and speech processing algorithms. A statistical model is then applied to these features in order to assign a final score to a test taker’s response.
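The two-stage design described above (feature extraction followed by a statistical model that maps features to a score) can be sketched as a simple linear model. The feature names, weights, and intercept below are illustrative assumptions only; SpeechRater's actual features and fitted model are not public.

```python
# Hypothetical weights for a handful of speaking-proficiency features,
# each assumed to be pre-normalized to the range [0, 1].
WEIGHTS = {"fluency": 1.2, "pronunciation": 1.0, "vocab_range": 0.8,
           "grammar": 0.9, "coherence": 0.7}
INTERCEPT = 0.5

def score_response(features):
    """Apply a linear scoring model to precomputed speech features
    (stage two of the pipeline; stage one, NLP/speech processing,
    is assumed to have produced the feature values)."""
    raw = INTERCEPT + sum(WEIGHTS[name] * value
                          for name, value in features.items())
    return round(raw, 2)

score = score_response({"fluency": 0.8, "pronunciation": 0.7,
                        "vocab_range": 0.6, "grammar": 0.9,
                        "coherence": 0.5})
```

In practice the model is trained on human-scored responses and reviewed by content experts, as the article notes; the linear form here is just the simplest example of "a statistical model applied to these features."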
While this model is trained on previously observed data scored by human raters, it is also reviewed by content experts to maximize its validity. If a response is found to be non-scorable due to audio quality or other issues, the engine can flag it for further review to avoid generating a potentially unreliable or invalid score. Human raters are always involved in the scoring of spoken responses in the high-stakes TOEFL iBT speaking assessment.
As human raters and SpeechRater are currently used together to score test takers’ responses in high-stakes speaking assessments, both play a part in what the future of scoring English language proficiency can be. Human raters have the ability to understand the content and discourse organization of a spoken response in a deep way. In contrast, automated speech scoring engines can more precisely measure certain detailed aspects of speech, such as fluency or pronunciation, exhibit perfect consistency over time, can reduce overall scoring time and cost, and are more easily scaled to support large testing volumes. When human raters and automated speech scoring systems are combined, the resulting system can benefit from the strengths of each scoring approach.
In order to continuously evolve automated speech scoring engines, research and development needs to focus on the following aspects, among others:
- Building automatic speech recognition systems with higher accuracy: Since most features of a speech scoring system rely directly or indirectly on this component of the system that converts the test taker’s speech to a text transcription, highly accurate automatic speech recognition is essential for obtaining valid features;
- Exploration of new ways to combine human and automated scores: In order to take full advantage of the respective strengths of human rater scores and automated engine scores, more ways of combining this evidence need to be explored;
- Accounting for abnormalities in responses, both technical and behavioral: High-performing filters capable of flagging such responses and excluding them from automated scoring are necessary to help ensure the validity and reliability of the resulting assessment scores;
- Assessment of spontaneous or conversational speech that occurs most often in day-to-day life: While automated scoring of such interactive speech is an important goal, these items present numerous challenges for both evaluation and scoring;
- Exploring deep learning technologies for automated speech scoring: This relatively recent paradigm within machine learning has produced substantial performance increases on many artificial intelligence (AI) tasks in recent years (e.g., automatic speech recognition, image recognition), and therefore it is likely that automated scoring also may benefit from using this technology. However, since most of these systems can be considered “black-box” approaches, attention to the interpretability of the resulting score will be important to maintain some level of transparency.
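One simple way to combine human and automated scores, as suggested in the list above, is a weighted blend with an adjudication rule for large disagreements. The weight and threshold below are illustrative assumptions, not ETS's actual combination policy.

```python
def combined_score(human, machine, max_gap=1.0, weight=0.5):
    """Blend a human rating with an automated engine score.
    If the two disagree by more than max_gap, return no score and
    flag the response for additional human review instead."""
    if abs(human - machine) > max_gap:
        return None, "needs_adjudication"
    return weight * human + (1 - weight) * machine, "ok"

blended, status = combined_score(3.0, 3.5)    # agreeing scores are averaged
flagged, status2 = combined_score(2.0, 4.0)   # large gap triggers review
```

The adjudication branch mirrors the filtering idea in the bullets above: when the two sources of evidence conflict, the safest move is to route the response back to a human rather than emit a possibly invalid score.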
To accommodate a growing and changing English-language learner population, next-generation speech scoring systems must expand automation and the range of what they are able to measure, enabling consistency and scalability. That is not to say the human element will be removed, especially for high-stakes assessments. Human raters will likely remain essential for capturing certain aspects of speech that automated scoring systems will find hard to evaluate accurately for some time to come, including the detailed aspects of spoken content and discourse. Using automated speech scoring systems in isolation for consequential assessments also runs the risk of failing to identify problematic responses by test takers – for instance, responses that are off-topic or plagiarized – which can, as a consequence, lead to reduced validity and reliability. Using human raters and automated scoring systems in combination may be the best way to score speech in high-stakes assessments for the foreseeable future, particularly if spontaneous or conversational speech is evaluated.
ETS works with education institutions, businesses and governments to conduct research and develop assessment programs that provide meaningful information they can count on to evaluate people and programs. ETS develops, administers and scores more than 50 million tests annually in more than 180 countries at more than 9,000 locations worldwide. We design our assessments with industry-leading insight, rigorous research and an uncompromising commitment to quality so that we can help education and workplace communities make informed decisions. To learn more visit ETS.
Researchers Develop New AI to Help Create Tutoring Systems
Researchers from Carnegie Mellon University have demonstrated how they can build intelligent tutoring systems. These systems are effective at teaching various subjects, including algebra and grammar.
The researchers used a new method that relies on artificial intelligence to let a human teacher teach a computer. The wording can seem confusing, but think of it as a computer being taught how to teach by a human teacher. The teacher shows the computer how to solve certain problems, such as multicolumn addition, and corrects the computer when it gets a problem wrong.
Solving Problems On Its Own
One of the interesting aspects of this method is that the computer system not only learns to solve problems the way it was taught, but can also generalize to solve all other problems in the topic. This means the computer can end up solving a problem in ways the teacher never explicitly demonstrated.
Daniel Weitekamp III is a Ph.D student in CMU’s Human-Computer Interaction Institute (HCII).
“A student might learn one way to do a problem and that would be sufficient,” Weitekamp said. “But a tutoring system needs to learn every kind of way to solve a problem. It needs to learn how to teach problem solving, not just how to solve problems.”
The challenge that Weitekamp explains is one of the greatest in the development of AI-based tutoring systems. Newly developed intelligent tutoring systems can track student progress, help determine what to do next, and help students develop new skills by selecting effective practice problems.
The Development of AI-Based Tutoring Systems
Ken Koedinger is a professor of human-computer interaction and psychology, and one of the early developers of intelligent tutors. He and his colleagues originally programmed the production rules by hand; according to Koedinger, each hour of tutored instruction took 200 hours of development. The group eventually developed a more efficient method, in which all of the possible ways to solve a problem are demonstrated, which cut the 200 hours down to 40 or 50 – but for some problem types it is extremely difficult to demonstrate every possible solution.
Koedinger has said that the new method could end up allowing a teacher to develop a 30-minute lesson in the same amount of time.
“The only way to get to the full intelligent tutor up to now has been to write these AI rules,” Koedinger said. “But now the system is writing those rules.”
In the new method, a machine learning program is used to simulate the ways in which students learn. A teaching interface was created by Weitekamp, and it utilizes a “show-and-correct” process for programming.
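A toy version of the show-and-correct loop might look like the following: a simulated learner starts with an incomplete addition rule (it ignores carrying), and a single teacher correction repairs the rule for all later problems. This is only a sketch of the idea under those assumptions, not CMU's actual system, which learns production rules via machine learning.

```python
def add_columns(a, b, use_carry):
    """The simulated learner's procedure for multicolumn addition,
    working column by column from the least significant digit."""
    digits_a, digits_b = str(a)[::-1], str(b)[::-1]
    carry, out = 0, []
    for i in range(max(len(digits_a), len(digits_b))):
        da = int(digits_a[i]) if i < len(digits_a) else 0
        db = int(digits_b[i]) if i < len(digits_b) else 0
        s = da + db + carry
        out.append(s % 10)
        # The incomplete rule simply drops the carry between columns.
        carry = s // 10 if use_carry else 0
    if carry:
        out.append(carry)
    return int("".join(map(str, reversed(out))))

def show_and_correct(problems):
    """Show-and-correct loop: each teacher correction triggers a rule
    revision that the learner reuses on every subsequent problem."""
    use_carry = False
    for a, b in problems:
        answer = add_columns(a, b, use_carry)
        if answer != a + b:          # the teacher corrects the mistake
            use_carry = True         # the learner revises its rule
    return use_carry

learned_carry = show_and_correct([(12, 34), (58, 67), (99, 1)])
# learned_carry == True: the correction on 58 + 67 taught carrying.
```

The point of the sketch is the interaction pattern, not the arithmetic: the teacher never writes a rule, only demonstrates and corrects, and the revised rule then generalizes to unseen problems such as 99 + 1.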
While the method was demonstrated with multicolumn addition, the machine learning engine that is used can be applied to other subjects, such as equation solving, fraction addition, chemistry, English grammar, and science experiment environments.
One of the main goals is for this method to allow teachers to construct their own computerized lessons without the need for an AI programmer. This lets teachers apply their own views on how to teach and which methods to use.
Weitekamp, Koedinger, and HCII System Scientist Erik Harpstead authored the paper describing the method. It was accepted by the Conference on Human Factors in Computing Systems (CHI 2020). The conference was originally planned for this month, but the COVID-19 pandemic forced its cancellation. The paper can now be found in the conference proceedings in the Association for Computing Machinery’s Digital Library.
The Institute of Education Sciences and Google helped support the research.