
Startups

Japanese Startup Creates Smart Mask Capable of Translating Eight Languages


Shortly after the outbreak of the COVID-19 pandemic, face masks came into widespread use around the globe. While governments scrambled to manage the chaos, companies like Donut Robotics were working on innovations suited to the moment. The Japanese startup created a high-tech version of the cloth face covering with its C-Face Smart mask.

The C-Face Smart mask is designed to aid communication and social distancing during times like this, but it also has non-pandemic-related capabilities, such as the ability to translate speech into eight different languages.

According to a statement by the company, “C-face is the world’s first smart mask that works with smartphones … developed by applying robotics technology. We hope this device will be useful in a society where people naturally practice social distancing.”

Cinnamon Robot and Donut Robotics

Donut Robotics did not create the underlying technology during the pandemic, but the company did adapt it for use during this time. It was originally intended as translation software for the company's robot, Cinnamon. However, that project was put on hold due to COVID-19, and the team turned its focus to the face mask.

The company was founded in Kitakyushu City, in Fukuoka prefecture by CEO Taisuke Ono and engineer Takafumi Okabe. The pair wanted to “change the world with small and mobile communication robots.”

After receiving venture capital investment, Ono and Okabe applied to an initiative called Haneda Robotics Lab, which aimed to use robots to provide services to visitors at Tokyo’s Haneda Airport. 

That is when the company developed the Cinnamon robot as one of four translation robot prototypes that were selected by the initiative in 2016. According to Haneda Robotics Lab, Cinnamon was chosen due to its user-friendly design and impressive aesthetics, as well as the software’s ability to operate efficiently in loud environments.

Following its success with Cinnamon, the company moved to Tokyo and expanded its team.

According to Ono, the software relies on Japanese language-specialized machine learning that was developed by translation experts. 

Ono says that when it comes to Japanese language users, “the technology is better than Google API, or other popular technologies,” because most competing apps translate mainly to and from English.

Once the COVID-19 pandemic hit Asia earlier this year, the team switched gears and focused on adapting its software to come up with a solution. 

“We were running short of money and wondering how to keep the company going,” Ono says. 

In the past two months, Donut Robotics has raised more than $800,000 (Dh2.94 million) through Fundinno, a Japanese crowdfunding platform. 

The company plans to launch the product in Japan in December, starting with between 5,000 and 10,000 masks. The price is expected to be between $40 and $50, and the company will charge a monthly fee for translation and transcription services. Donut Robotics then plans to release the mask in other parts of the world starting in the second quarter of next year.

C-Face Smart Mask

The C-Face Smart mask can translate from Japanese into English, Chinese, French, Spanish, Korean, Vietnamese, and Indonesian.

The face mask has cutouts in the front to keep air flowing, so users still need to wear a standard face mask underneath. It is constructed of white plastic and silicone, and an embedded microphone connects to the user’s smartphone via Bluetooth.

The Bluetooth chip can connect to smartphones up to 32 feet away, and Ono hopes that the technology will aid in social distancing during the pandemic, especially in environments like hospitals and offices. 

“We still have many situations where we have to meet in person,” Ono says. “In this new normal…the mask and the app are very helpful.”
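Donut Robotics has not published the internals of its app, but the flow described above (speech captured by the mask's microphone, sent over Bluetooth to the paired smartphone, then transcribed or translated) can be sketched roughly as follows. Every function and language code here is an illustrative stand-in, not the company's actual code.

```python
# Illustrative sketch of the mask-to-phone flow described in the article.
# The speech-to-text and translation calls are stand-in stubs, since the
# Donut Robotics app is proprietary.

TARGET_LANGUAGES = ["en", "zh", "fr", "es", "ko", "vi", "id"]  # languages named in the article


def speech_to_text(audio: bytes) -> str:
    """Stand-in for the app's Japanese speech recognition."""
    return "<transcribed Japanese speech>"


def translate(text: str, target_lang: str) -> str:
    """Stand-in for the app's Japanese-to-target-language translation."""
    return f"<{text} rendered in {target_lang}>"


def handle_utterance(audio_from_mask: bytes, target_lang: str) -> dict:
    """Audio arrives from the mask over Bluetooth; the phone transcribes and translates it."""
    if target_lang not in TARGET_LANGUAGES:
        raise ValueError(f"unsupported target language: {target_lang}")
    transcript = speech_to_text(audio_from_mask)
    return {"transcript": transcript, "translation": translate(transcript, target_lang)}


print(handle_utterance(b"...", "en"))
```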

 


Natural Language Processing

AI Startup Diffbot Reads Entire Public Internet To Pursue Fact-Based Text Generation


The recent advances in natural language processing and text generation accomplished by OpenAI through their GPT-2 and GPT-3 language models have been impressive, producing text that looks as though it might genuinely have been written by a human. Unfortunately, although these models excel at writing natural-sounding text, they are not equipped to write text that is factual. Advanced language models cobble sentences together from words that make the most sense in context, without paying any attention to the veracity of the claims within the generated text. As reported by MIT Technology Review, a startup known as Diffbot aims to solve this problem by having an AI extract as many facts as it can from the internet.

Diffbot is a startup hoping to make AI more useful for practical text generation tasks like auto-populating spreadsheets and autocompleting sentences or code. In order for the text generated by the AI to be reliable, the AI itself needs to be trustworthy and it has to have some concept of factual vs. fictional statements. Diffbot’s approach to giving a text generation program the ability to generate factual statements begins by collecting massive amounts of text from practically the entire public web. Diffbot parses text in multiple languages and splits up text into sets of fact-based triplets, with the subject, object, and verb of a given fact being used to link one concept to another. For instance, it might represent facts regarding Bill Gates and Microsoft like this:

Bill Gates is the founder of Microsoft. Microsoft is a computer technology company.
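As a rough illustration of the triplet idea, the two statements above could be reduced to (subject, relation, object) facts like the ones below. Diffbot's internal schema is not public, so the exact layout here is only an assumption.

```python
# Hypothetical (subject, relation, object) triples for the example facts;
# Diffbot's real schema is not public, so this layout is illustrative only.
facts = [
    ("Bill Gates", "is the founder of", "Microsoft"),
    ("Microsoft", "is a", "computer technology company"),
]

# Each triple links one concept (the subject) to another (the object)
# through the relation expressed by the verb phrase.
for subject, relation, obj in facts:
    print(f"{subject} --[{relation}]--> {obj}")
```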

Diffbot takes all of these short factoids and joins them together to create a knowledge graph. Knowledge graphs create webs of relationships between concepts, often along with a reasoner that assists in drawing new conclusions from those relationships. Put another way, knowledge graphs use data interlinking, and they can help machine learning algorithms model knowledge domains. Knowledge graphs have been around for decades, and many early AI researchers considered them important tools for allowing AI to understand the human world. However, knowledge graphs were typically created by hand, a difficult, painstaking process. Automating the creation of knowledge graphs could allow AIs to attain a much greater contextual understanding of concepts and produce text that is fact-based.
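A toy version of that joining step, together with a one-hop "reasoner" that chains two facts into a new conclusion, might look like the sketch below. It uses plain Python dictionaries rather than any real graph store, purely to make the idea concrete.

```python
from collections import defaultdict

# Join the triples into a tiny knowledge graph: each subject maps to its
# outgoing (relation, object) edges.
triples = [
    ("Bill Gates", "is the founder of", "Microsoft"),
    ("Microsoft", "is a", "computer technology company"),
]
graph = defaultdict(list)
for subject, relation, obj in triples:
    graph[subject].append((relation, obj))

# A minimal "reasoner": follow two edges to state an indirect conclusion.
for rel1, mid in graph["Bill Gates"]:
    for rel2, obj in graph[mid]:
        print(f"Bill Gates {rel1} {mid}, which {rel2} {obj}.")
# -> Bill Gates is the founder of Microsoft, which is a computer technology company.
```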

Google started using knowledge graphs a few years ago to aid in providing summaries of information when a popular topic is searched for. The knowledge graph is used to pull the most relevant factoids and present them as a summary. Diffbot wants to do the same thing for every topic, not just the most popular ones. This requires building an absolutely massive knowledge graph compiled by crawling the entire public web, something that otherwise only Google and Microsoft do. Diffbot scans the whole web and updates the knowledge graph with new information every four or five days, and over the course of a month it adds somewhere between 100 million and 150 million entries.

Diffbot doesn’t read the text of a website the way normal web crawlers do; rather, it uses computer vision algorithms to extract the raw pixels of a web page and pull video, image, article, and discussion data from it. It identifies the key elements of the webpage and then extracts facts in a variety of languages, in adherence to the three-part factoid schema.

Currently, Diffbot offers both paid and free access to its knowledge graph. While researchers may access the graph for free, companies like DuckDuckGo and Snapchat use it to summarize text and extract snippets of trending news items. Meanwhile, Nike and Adidas utilize the platform to find sites selling counterfeit products, which is possible because Diffbot is able to ascertain which sites are actually selling shoes, not just having discussions about them.

In the future, Diffbot plans to expand its capabilities and add a natural-language interface to the platform, capable of answering almost any question you ask it and backing up those answers with sources. Ideally, the capabilities of Diffbot would be combined with a powerful language synthesis model like GPT-3.


Interviews

Netanel Eliav, CEO of Sightbit – Interview Series


Netanel Eliav is the CEO of Sightbit, a global development project that harnesses advances in AI and image recognition technology to prevent drowning and save lives.

How did the concept for Sightbit originate?

Friends Netanel Eliav and Adam Bismut were interested in using tech to improve the world. On a visit to the beach, their mission became clear. Adam noticed the lack of tech support for lifeguards, who monitored hard-to-see swimmers with binoculars.

The system uses standard cameras that cover a defined area and transmit that information in real time to lifeguards. What type of range are the cameras capable of? Also, how much does accuracy degrade at greater range?

Sightbit’s innovation is in the software. We work with various off-the-shelf cameras of different ranges, customizing camera setup to meet the needs of each customer and to ensure that the desired area is protected.

At Israel’s Palmahim Beach, where we are conducting a pilot, we built a dedicated cement platform that holds three cameras. Each camera covers 300 meters out to sea in normal conditions, the range required at Palmahim Beach.

A monitor displays a panoramic view of the water and beach, like a security camera display. A dashboard is superimposed over the video feed. Sightbit alerts appear as flashing boxes around individuals and hazards. Multiple views from different camera vantage points are available on a single screen. When a lifeguard clicks on an alert, the program zooms in, allowing the lifeguard to see the swimmer much more clearly than is possible with the naked eye. Four additional cameras will be installed shortly.


Can you discuss some of the computer vision challenges behind being able to differentiate between a human swimming and a human struggling to stay afloat?

We can detect some of the signs of distress based on the following: the location of a person who might be caught in a rip current, far from shore, or in a dangerous area; and movement or lack of movement. Our system can distinguish swimmers bobbing up and down in the water, floating face down, or waving for help as signs of distress.

Sightbit has developed software that incorporates AI based on convolutional neural networks, image detection, and other proprietary algorithms to detect swimmers in distress and avoid false positives.
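Sightbit's models and algorithms are proprietary, but the cues described above (a dangerous location and a lack of movement over time) suggest a rule layer sitting on top of a per-frame person detector and tracker. The sketch below is purely illustrative: the Track fields, thresholds, and flag names are assumptions, not Sightbit's implementation.

```python
from dataclasses import dataclass


@dataclass
class Track:
    """One swimmer tracked across frames (output of a detector/tracker, stubbed out here)."""
    distance_from_shore_m: float
    seconds_without_movement: float
    in_rip_current_zone: bool


# Illustrative thresholds only; Sightbit's actual criteria are not public.
MAX_SAFE_DISTANCE_M = 150.0
MAX_STILL_SECONDS = 20.0


def distress_flags(track: Track) -> list:
    """Apply simple rules to one tracked swimmer and return any distress cues found."""
    flags = []
    if track.in_rip_current_zone or track.distance_from_shore_m > MAX_SAFE_DISTANCE_M:
        flags.append("dangerous location")
    if track.seconds_without_movement > MAX_STILL_SECONDS:
        flags.append("lack of movement")
    return flags


# Example: a swimmer far from shore who has not moved for half a minute.
print(distress_flags(Track(180.0, 30.0, False)))
# -> ['dangerous location', 'lack of movement']
```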

What are the risk factors for false positives such as misidentifying someone as drowning, or false negatives such as misidentifying a potential drowning?

The drowning detection feature sometimes generates a low-level warning when a swimmer has remained underwater for long stretches of time.

Like lifeguards, Sightbit primarily detects swimmers in distress. A drowning alert is an alert that has come too late. We focus on dangerous situations that can lead to drowning, allowing for de-escalation before they get out of control. For example, we warn when swimmers are caught in rip currents so that lifeguards or other rescue personnel can reach the individual in time.

Our real-time alerts include:

  • Swimmers in distress
  • Rip currents
  • Children alone in or by the water
  • Water vessels entering the swim area
  • Swimmers entering dangerous areas, such as choppy water, deep water, or hazardous areas alongside breakwater structures or rocks
  • Drowning incidents – soon to be deployed at Palmahim
  • And other situations

What type of training is needed to use the Sightbit system?

No special training is needed. Sightbit’s user interface takes five minutes to learn. We designed the system with lifeguards to ensure that it is easy for them to use and master.

Can you discuss what happens in the backend once an alert is triggered for a potential drowning?

The beach cameras feed into a GPU for video analysis and a CPU for analytics. When the CPU detects a threat, it generates an alert. This alert is customized to customer needs. At Palmahim, we sound alarms and generate visual alerts on the screen. Sightbit can also be configured to call emergency rescue.
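Based on that description (GPU-side video analysis feeding a CPU-side analytics stage, which raises alerts through channels configured per customer), a minimal sketch of the routing step might look like this. The channel names and site configurations are assumptions for illustration, not Sightbit's actual setup.

```python
# Illustrative alert routing: detection is assumed to have happened upstream on the
# GPU; this CPU-side step decides which configured channels to notify for a site.

ALERT_CHANNELS = {
    "palmahim": ["audible_alarm", "dashboard_highlight"],             # channels described for the pilot
    "unstaffed_site": ["dashboard_highlight", "emergency_dispatch"],  # hypothetical configuration
}


def route_alert(site: str, threat: dict) -> list:
    """Return the notification actions configured for this site and threat."""
    return [
        f"{channel}: {threat['type']} at camera {threat['camera_id']}"
        for channel in ALERT_CHANNELS.get(site, [])
    ]


print(route_alert("palmahim", {"type": "swimmer in rip current", "camera_id": 2}))
```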

Could you discuss some of your current pilot programs and the types of results that have been achieved?

Sightbit is conducting a pilot at Palmahim Beach in partnership with the Israel Nature and Parks Authority. The system is installed at the Palmahim lifeguard tower and is in use by lifeguards (see above for details about camera placement, warnings, and the Sightbit monitor). The pilot went live at the end of May.

At Palmahim, three lifeguards, all stationed at one central tower, guard the one-kilometer beach. Sightbit provides instantaneous alerts when swimmers are in danger and camera views of swimmers far from the tower.

Prior to the pilot partnership at Palmahim Beach, we conducted proof-of-concept testing at beaches throughout Israel at the invitation of local authorities.

How have government officials reacted so far when introduced to the technology?

Extreme enthusiasm! Cities and major government-run beaches as well as private beaches in Israel, the United States, the Balkans, and Scandinavia have invited Sightbit to conduct pilots. We have been granted permissions by all relevant government bodies.

Is there anything else that you would like to share about Sightbit?

Yes!

  1. We are currently raising funds as part of a seed round. Investors around the world have reached out to us, and we have already received funding offers. We previously received pre-seed funding from the Cactus Capital VC fund in Israel.

  2. Long-term potential: People are not optimized for tracking dozens, and certainly not hundreds, of swimmers from a watchtower. Looking long term, Sightbit can enable agencies to guard more shoreline at lower costs by using Sightbit systems for front-line monitoring. Lifeguards can be assigned to headquarters or patrol duty, allowing teams to respond faster to incidents anywhere along the beach. This is lifesaving. Currently, even during peak summer months, lifeguards monitor less than half of the shoreline at designated public swimming beaches.

  3. Sightbit can safeguard sites 24/7, all year round. Where there is no lifeguard service, Sightbit alerts emergency dispatch or local rescue services when a swimmer is in danger (for example, a swimmer swept out to sea in a rip current). Sightbit software can also pinpoint and track a swimmer’s location and deliver rescue tubes via small drones.

  4. Sightbit can bring monitoring to many different aquatic sites that do not currently employ lifeguards. With Sightbit, aquatic work sites, marinas, reservoirs, and other sites can benefit from water safety alerts.

Sightbit also provides risk analytics and management insights, which allow customers to anticipate hazards in advance and improve operations. Customers can track water and weather conditions, crowding, and more.

Thank you for the interview regarding this important project. Readers who wish to learn more should visit Sightbit.


Investments

What are the main obstacles that are preventing AI startups from scaling up? – Thought Leaders


By Salvatore Minetti, CEO, Fountech.Ventures

The promise of artificial intelligence (AI) has undoubtedly captured the imagination of many investors over the past decade. Fuelled by strong public interest, the technology has become a real force for good, promising to deliver solutions with potential to solve some of the world’s biggest issues.

Relative to other emerging technologies, AI companies were the leading investment category globally in 2019, securing over $23 billion in financing according to Tech Nation.

However, AI companies require more than just investment to truly thrive in the current climate. Indeed, the issue is not so much the shortage of start-ups as it is the shortage of scale-ups.

To truly push this discipline forward, it is time that we ramp up our efforts to nurture only the most innovative businesses towards long-term success, so that they can become formidable companies. This begs the question: what are the obstacles holding AI businesses back from growing beyond the start-up phase?

Determining ‘true’ AI businesses

It is no secret that the tag ‘AI’ has become ubiquitous, with companies using the term left, right and centre in order to secure investment. The problem with this is that some companies without AI at their core are holding back progress in the sector at large, hindering the development of progressive solutions.

These issues with semantics make it more difficult for investors to determine which businesses actually use ‘true’ AI and which don’t. Indeed, a recent MMC Ventures report revealed that two fifths of Europe’s AI start-ups don’t actually use AI in any of their products. Examples like this serve to highlight how pervasive the misuse of the term is. Undoubtedly, overstating what a product or service actually does can lead not only to overspending and poor execution, but also to a business’s ultimate downfall when it is outcompeted by those with more clarity and focus.

Investors would therefore do well to avoid this fate by vetting companies thoroughly early on in the process. This can be achieved by asking key questions, such as ‘does this company derive its competitive advantage from the use of AI?’ and ‘will this company propel the sector forward?’. This way, resources can be spent more effectively on companies with scalable technical solutions and a real competitive edge.

Start-up stumbling blocks

In the deep-tech arena, ambitious young teams generally have the determination and technical expertise required to design and create an innovative product. However, powerful concepts aren’t always enough to guarantee the success of a new business venture, and too much focus on the technology could stymie its progress.

The lack of clear metrics for AI startups is particularly challenging; it is difficult to measure what makes a ‘good’ AI company. The hype surrounding AI and its growing popularity has also given rise to fierce competition, which means that founders need to be particularly attuned to the obstacles they will face.

Some fundamentals are important for every business. For one, entrepreneurs must be able to demonstrate that they are addressing a large and important problem – and show why they are in the best position to solve it. Perhaps even more importantly, businesses need to establish whether people will be willing to pay good money for their solution.

AI start-ups will generally fall at many of the same hurdles as their more traditional counterparts. A CB Insights report revealed the most common reasons that budding entrepreneurs fail on their way to the top, which included a lack of market need for the product, not having the right team, and being out-competed by other businesses.

The first of these demands particular attention: the blight of so many tech startups is that they build the product, and then hope that somebody wants it. A failure to take the appropriate steps at the outset to understand the potential fit and demand means that the final product doesn’t ultimately capture the attention of the target market.

For AI businesses, however, there are additional elements that must also be considered. The team should be able to demonstrate that their AI is truly adding value to the data they are using – and not just being used as a smokescreen. Does the AI help explain patterns in the data, derive accurate explanations, identify important trends and ultimately optimize the use of the information?

If not, they must question whether they should really be selling themselves as an AI startup. There is a real risk that resources will be spent needlessly on building and marketing a solution that does not truly solve a problem using artificial intelligence. Ultimately, such businesses are likely to lose their vision over time and will fail to live up to the mark they might have envisaged for themselves. They may also struggle to secure funding; after all, most VCs will not want to risk an investment into a technology that is ambiguous.

Young teams also tend to face roadblocks when it comes to the financial side of things: AI start-ups are either under-funded from the outset or burn more cash than necessary. To achieve sustainable growth, fledgling companies need to be able to plan beyond the development budget and create a scalable commercial model that will stand the test of time. Granted, this is no easy feat with limited business nous.

Nurturing AI start-ups to success

Many of these missteps boil down to the fact that start-ups often fall short where appropriate mentorship and business acumen are concerned. Indeed, most would benefit from some additional expertise to navigate common stumbling blocks.

It is fundamental therefore that company founders work with third-party advisors to compensate for any gaps in knowledge. Young teams need mentors to help manoeuvre unfamiliar territory, and to provide additional legal, financial, and logistical guidance.

Ultimately, simply financing a project just isn’t enough. It is essential that we work to provide a more holistic model to support fledgling AI start-ups, so that companies are set on the path to commercially scalable projects. It is only by providing specialist support and assistance with the more fundamental aspects of business – as well as access to talent, capital and peer networks – that we can really push the needle forward in pioneering AI technology.
