
Joy Mustafi, Chief Data Scientist of Aviso, Inc – Interview Series


Ranked as one of India’s top 10 data scientists by Analytics India Magazine, Joy Mustafi has led data science research at tech giants including Salesforce, Microsoft, and IBM, earning 50 patents and authoring over 25 publications on AI.

He was associated with IBM for a decade as a Data Scientist, working on a variety of business intelligence solutions, including IBM Watson. He then served as a Principal Applied Scientist at Microsoft, responsible for AI research. Most recently, Mustafi was the Principal Researcher for Salesforce’s Einstein platform.

Mustafi is also the Founder and President of MUST Research, a non-profit organization promoting excellence in the fields of data science, cognitive computing, artificial intelligence, machine learning, and advanced analytics for the benefit of society.

Recently, Mustafi joined Redwood City-based Aviso, Inc as Chief Data Scientist, where he will leverage his decades of experience to help Aviso customers accelerate deal-closing and expand revenue opportunities.

What initially attracted you to AI?

I love mathematics, and the same goes for programming. I did my graduate degree in statistics and post-graduate work in computer applications. When I started my AI research journey back in 2002 at the Indian Statistical Institute in Kolkata, I used the C programming language to develop an Artificial Neural Network system for handwritten numeral recognition. That was 2500+ lines of code, all written from scratch without any inbuilt libraries apart from standard input/output. It consisted of data cleansing and pre-processing, feature engineering, and a backpropagation algorithm with a multilayer perceptron. The entire process was a combination of all the subjects that I had studied. At that time AI was not so popular in the corporate world, and few academic organisations were doing advanced research in the field. And, by the way, AI wasn’t new at the time! The field of AI research dates all the way back to 1956, when Prof. John McCarthy and others inaugurated the field at a now-legendary workshop at Dartmouth College.
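
For readers curious what such a system involves, here is a compressed sketch of the same idea, a multilayer perceptron trained with backpropagation. It uses NumPy and random stand-in data rather than the original from-scratch C and real numeral images, so treat it as an illustration, not Mustafi’s code:

```python
# A compressed sketch of a multilayer perceptron with backpropagation.
# NumPy stands in for from-scratch matrix code; the data is a random
# placeholder for real handwritten-numeral features.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((100, 64))                      # e.g. 8x8 pixel features
y = np.eye(10)[rng.integers(0, 10, 100)]       # one-hot labels, 10 numerals

W1 = rng.normal(0, 0.1, (64, 32))              # input -> hidden weights
W2 = rng.normal(0, 0.1, (32, 10))              # hidden -> output weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for epoch in range(500):
    h = sigmoid(X @ W1)                        # forward pass: hidden layer
    out = sigmoid(h @ W2)                      # forward pass: output layer
    # Backpropagation: push the output error back through each layer.
    grad_out = (out - y) * out * (1 - out)
    grad_hid = (grad_out @ W2.T) * h * (1 - h)
    W2 -= 0.1 * h.T @ grad_out                 # gradient-descent updates
    W1 -= 0.1 * X.T @ grad_hid

print("training accuracy:", (out.argmax(1) == y.argmax(1)).mean())
```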

 

You have worked at some of the most advanced companies in AI, such as IBM and Microsoft. What has been the most interesting project that you have worked on?

I want to mention the first patent I was awarded while working at IBM: a method for solving word problems in natural language, which was an open problem with IBM Watson. The system I developed can understand an arithmetic or algebraic problem stated in natural language and provide a solution in real-time as a natural language answer. To do that, the system had to handle the following key steps: get the input problem statements and the question to be answered; convert the input sentences to a sequence of sentences which are well-formed from a mathematical perspective; convert the well-formed sentences into mathematical equations; solve the set of equations; and narrate the mathematical result in natural language.
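
As a toy illustration of that pipeline (my own sketch using SymPy, not IBM’s patented method), here is the idea reduced to one hypothetical problem pattern:

```python
# A toy sketch of the word-problem pipeline: extract quantities from a
# simple arithmetic word problem, build a well-formed equation with
# SymPy, solve it, and narrate the result in natural language.
import re
import sympy as sp

def solve_word_problem(text: str) -> str:
    # Handles one hypothetical pattern: "X has A items. X gets B more."
    a, b = [int(n) for n in re.findall(r"\d+", text)][:2]
    x = sp.Symbol("x")
    equation = sp.Eq(x, a + b)          # the well-formed mathematical form
    result = sp.solve(equation, x)[0]   # solve the equation
    return f"The answer is {result}."   # narrate the result

print(solve_word_problem("Tom has 3 apples. He buys 5 more. How many does he have?"))
# -> The answer is 8.
```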

There’s also my best project for Microsoft: Softie! I invented and built a physical robot equipped with various types of interchangeable input devices and sensors to allow it to receive information from humans. A standardized method of communication with the computer allowed the user to make practical adjustments, enabling richer interactions depending on the context. We were able to implement a robust system with features including a keyboard, pointing device, touchscreen, computer vision, speech recognition, and so forth. We formed a team from various business units, and encouraged them to explore research applications in artificial intelligence and related fields.

 

You’re also the Founder and President of MUST Research, a non-profit organization registered under the Society and Trust Act of India. Could you tell us about this non-profit?

MUST Research is dedicated to promoting excellence and competence in the fields of data science, cognitive computing, artificial intelligence, machine learning, and advanced analytics for the benefit of society. MUST aims to build an ecosystem for interaction between academia and enterprise: helping both resolve problems, making them aware of the latest developments in the cognitive era, providing solutions, offering guidance and training, organizing lectures, seminars and workshops, and collaborating on scientific programs and societal missions. The most exciting feature of MUST is its fundamental research on cutting-edge technologies like artificial intelligence, machine learning, natural language processing, text analytics, image processing, computer vision, audio signal processing, speech technology, embedded systems, robotics, etc.

 

What was it that inspired you to launch MUST Research?

My love of sci-fi movies and mathematics means I’m often thinking about how technology can change the world, and I’d been thinking about forming a group of like-minded experts on advanced technologies since 1993, when I was in 9th grade. Once I got my first job, it took 10 years to call for a meeting — and another 10 years to identify a group of suitable experts and form a non-profit society. Now, though, we have around 500 data scientists in MUST across India who are passionately contributing to research on emerging technologies.

 

Over the past several years the industry has seen significant advances in deep learning, reinforcement learning, natural language processing, etc. Which area of machine learning do you currently view as the most exciting?

All machine-learning algorithms are exciting once they are implemented as a product or service that can be used by businesses or individuals in the real world. The Deep Learning era has pros and cons, though: sometimes it helps with automatic feature engineering, but at the same time it can work like a black box and end up in a garbage-in-garbage-out scenario if proper datasets or algorithms aren’t used. Some of the latest technologies are also resource-hungry and require huge amounts of processing power, time, and data. The key thing to remember is that Deep Learning is a subset of Machine Learning (ML), which in turn is a subset of Artificial Intelligence (AI), and AI is a subset of Data Science, so it’s all connected. And it’s not about Python, R or Scala; I started my AI journey in C, and one can even write AI programs in assembly language. Building successful AI systems depends first and foremost on understanding the business or research environment, and then connecting the dots between actions and data to build a system which genuinely helps various people in different domains. Whether you’re working with Natural Language Processing, Computer Vision, Video Analytics, Speech Technology, or Robotics, the best way forward is to start with the simplest possible approach, and then adopt more complex methods iteratively as you experiment with and refine your system.
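
To make the start-simple advice concrete, here is a minimal sketch (my own illustration using scikit-learn’s bundled digits data, not a recommendation of a specific stack): establish a trivial baseline first, then a simple model, and reach for anything heavier only if the gap justifies it.

```python
# A minimal sketch of "start simple, iterate later": fit a trivial
# baseline and a logistic regression, then decide whether a deeper
# model is actually warranted by the gap between them.
from sklearn.datasets import load_digits
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)          # small handwritten-digit set
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

baseline = DummyClassifier(strategy="most_frequent").fit(X_tr, y_tr)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

print("baseline accuracy:", baseline.score(X_te, y_te))
print("logistic regression:", model.score(X_te, y_te))
# Only if this still falls short would a deeper model be the next iteration.
```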

 

You are a frequent guest speaker at leading universities in India. What is one question that you often hear from students, and how do you best answer it?

The single question I hear most often is: “How can I become a data scientist?” I always tell young people that it’s definitely possible, and try to guide them towards using their love of mathematics, statistics, or computer science to solve real-world business problems. People also ask how they can join MUST, and again, the answer is simple: “Build your profile with multiple projects and focus on thinking outside of the box.” If you want to become a data scientist, you also have to prove that you can innovate. Without innovation, we can’t call ourselves scientists. Of course, being awarded patents or publishing your research in reputed journals and conferences also helps!

 

You recently joined Redwood City-based Aviso as Chief Data Scientist, in order to apply your AI/ML expertise. Could you tell us a bit about Aviso and your role with the company?

Aviso uses AI and machine learning to guide sales executives and take the guesswork out of the deal-making process. That’s a fascinating challenge, and my primary responsibility is to help the organization grow in a positive direction, using deep research to set the stage for the customers’ success. I’m using my knowledge and experience in artificial intelligence and innovation to help make our core products and research projects more:

Adaptive: They must learn as information changes, and as goals and requirements evolve. They must resolve ambiguity and tolerate unpredictability. They must be engineered to feed on dynamic data in real time.

Interactive: They must interact easily with users so that those users can define their needs comfortably. They must interact with other processors, devices, services, as well as with people.

Iterative and Stateful: They must aid in defining a problem by asking questions or finding additional source input if a problem statement is ambiguous or incomplete. They must remember previous interactions in a process and return information that is suitable for the specific application at that point in time.

Contextual: They must understand, identify, and extract contextual elements such as meaning, syntax, time, location, appropriate domain, regulation, user profile, process, task and goal. They must draw on multiple sources of information, including both structured and unstructured digital information.
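
As a toy illustration of the iterative, stateful, and contextual properties above (entirely my own sketch, not Aviso’s implementation), consider an assistant that remembers prior turns and asks a clarifying question when a request is ambiguous:

```python
# A toy sketch of an iterative, stateful, contextual interaction loop:
# the assistant keeps per-session state, asks a clarifying question when
# a request is ambiguous, and folds earlier answers into later replies.
class StatefulAssistant:
    def __init__(self):
        self.history = []   # previous interactions (state)
        self.context = {}   # extracted contextual elements

    def respond(self, utterance: str) -> str:
        self.history.append(utterance)
        if utterance.upper() in {"EMEA", "APAC", "AMERICAS"}:
            self.context["region"] = utterance.upper()
            return f"Got it, tracking deals in {self.context['region']}."
        if "deal" in utterance.lower() and "region" not in self.context:
            return "Which region's deals do you mean?"  # iterative clarification
        return f"Answering with context {self.context}, {len(self.history)} turns so far."

bot = StatefulAssistant()
print(bot.respond("Show me deal risk"))   # ambiguous -> clarifying question
print(bot.respond("EMEA"))                # context remembered across turns
print(bot.respond("Show me deal risk"))   # now answered with stored context
```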

 

What was it that attracted you to this position with Aviso?

Aviso is working to replace bloated legacy CRM systems with frictionless, AI-enabled tools that can deliver actionable insights and unlock sales teams’ full potential. Our product is a smart system which understands the pain points of salespeople, does away with time-consuming data entry, and gives executives the suggestions and guidance they need to close deals effectively. I was attracted to the strong leadership team and customer base, but also to Aviso’s commitment to using sophisticated AI tools to solve real-world challenges. Selling is a vital part of any business, and Aviso helps with that by leveraging the power of artificial intelligence. Bulls-eye! What more could you want?

 

Lastly, is there anything else that you would like to share about AI?

Artificial intelligence makes a new class of problems computable. To respond to the fluid nature of users’ understanding of their problems, a cognitive computing system offers a synthesis not just of information sources but of influences, contexts, and insights. These systems differ from current computing applications in that they move beyond tabulating and calculating based on pre-configured rules and programs. They can infer and even reason based on broad objectives. In this sense, cognitive computing is a new type of computing with the goal of developing more accurate models of how the human brain or mind senses, reasons, and responds to stimulus. It is an interdisciplinary field concerned with creating computers and software capable of intelligent behavior: artificial intelligence is a place where a number of sciences and professions converge, including computer science, electronics, mathematics, statistics, psychology, linguistics, philosophy, neuroscience, and biology. That’s what makes it so exciting!



Garth Rose, CEO of GenRocket, Inc – Interview Series


Garth is the Co-Founder & CEO of GenRocket. He is an expert at launching and building technology startups. He has held numerous senior leadership roles in startups over the past 25 years, including President & CEO of Concentric Visions (VC-backed and acquired), VP Sales & VP Business Development at Indus River Networks (VC-backed and acquired), VP Sales & Marketing at Digital Products (acquired), and National Sales Manager at Leading Edge Products.

In 2012 you co-founded GenRocket, a company that specializes in enterprise test data automation. What was the initial vision that inspired this?

I met GenRocket Co-Founder Hycel Taylor in 2011 and he educated me about the need for accurate, conditioned test data for effective software testing. Hycel had done a lot of research and found a huge gap when it came to test data solutions. Hycel decided to architect his own platform that was low cost, really fast and flexible.

What are some of the benefits of using test data versus production data?

Proper software testing means not just testing “positive” conditions of an application but also testing “negative” conditions as well as permutations and edge cases. Production data is useful for data analytics but has limitations for many test cases. One of our financial services customers shared that their production data can only fully satisfy 33% of their testing requirements.
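
As a generic illustration (my own toy sketch, not GenRocket’s engine), synthetic generation lets you deliberately cover negative conditions and edge cases that production data may never contain:

```python
# A toy sketch of generating "negative" and edge-case test data that
# production records rarely contain: empty fields, boundary amounts,
# oversized strings, and non-finite values.
import random
import string

def random_account() -> dict:
    # Ordinary "positive" records.
    return {
        "name": "".join(random.choices(string.ascii_letters, k=12)),
        "balance": round(random.uniform(0.01, 1_000_000), 2),
    }

def edge_case_accounts() -> list:
    # Deliberately hostile "negative" records.
    return [
        {"name": "", "balance": 0.0},               # empty name, zero balance
        {"name": "x" * 10_000, "balance": -0.01},   # oversized name, negative amount
        {"name": None, "balance": float("inf")},    # null and non-finite values
    ]

test_data = [random_account() for _ in range(5)] + edge_case_accounts()
for row in test_data:
    print(row)
```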

The speed of data generation is important. What speeds can GenRocket deliver?

For a typical automated test case we deliver test data in about 100 milliseconds. For volume data GenRocket generates at a rate of about 10,000 rows of data per second. For big data applications we can use multiple GenRocket instances in parallel to generate millions to billions of rows of data in minutes.
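
For a sense of what parallel generation looks like (my own sketch with made-up row logic, not GenRocket’s implementation), the multiple-instance idea maps naturally onto worker processes:

```python
# A minimal sketch of scaling synthetic row generation by running
# generator workers in parallel, analogous to multiple generator
# instances producing data side by side.
from multiprocessing import Pool
import random

def generate_rows(n: int) -> list:
    # Each worker produces n synthetic rows independently.
    return [(i, random.random()) for i in range(n)]

if __name__ == "__main__":
    workers, rows_per_worker = 8, 250_000
    with Pool(workers) as pool:
        chunks = pool.map(generate_rows, [rows_per_worker] * workers)
    total = sum(len(chunk) for chunk in chunks)
    print(f"generated {total:,} rows")   # 2,000,000 rows across 8 processes
```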

There’s always a learning curve when it comes to generating both test and production data. Do you offer any type of user training?

GenRocket University was created in 2017 to educate our customers and channel partners on GenRocket. We offer multiple online training courses at no cost, including our “GenRocket Certified Engineer” course.

You currently serve enterprise customers in over 10 verticals. What are these different types of enterprise customers?

Our customers around the world include major banks, numerous global financial services companies, major U.S. healthcare providers, major manufacturers, global supply chain firms, and data information services firms.

Our most active industry verticals are banking, financial services, insurance, healthcare and manufacturing.

How does GenRocket differ from other Test Data Management tools?

Traditional Test Data Management (TDM) solutions copy, mask and refresh production data. These solutions tend to be expensive and complex and production data also has limitations for software testing. GenRocket flips the TDM paradigm by quickly and accurately generating most of the required data and querying the small amount of production data that is needed for some of the tests. The GenRocket Test Data Automation (TDA) approach is faster, lower cost and easier to implement and use than TDM.

Could you tell us a little bit about GenRocket’s compatibility with testing frameworks?

Every organization has its own testing framework or testing tools, so GenRocket has the flexibility to integrate into every customer’s environment. GenRocket can integrate with just about any testing framework in any language, and with any testing tool like Jenkins or Selenium. GenRocket can also insert data into any database, and can send data over web services. GenRocket also offers integration with Salesforce and can support complex data feeds like NACHA in banking and EDI and HL7 in healthcare.
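
As a generic example of that kind of framework integration (my own sketch, not GenRocket’s actual API), generated data can be exposed to a test runner as a fixture so each test consumes fresh synthetic rows:

```python
# A generic sketch of feeding synthetic data into a testing framework:
# a pytest fixture hands each test freshly generated rows.
import random
import pytest

def generate_customers(n: int) -> list:
    # Stand-in for any synthetic data generator.
    return [{"id": i, "credit_limit": random.randint(0, 50_000)} for i in range(n)]

@pytest.fixture
def customers():
    return generate_customers(100)

def test_credit_limits_are_non_negative(customers):
    assert all(c["credit_limit"] >= 0 for c in customers)
```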

Is there anything else that you would like to tell us about GenRocket?

We rely on an extensive network of trained channel partners to introduce and deliver GenRocket test data solutions to our global customers. Partners like Cognizant, HCL, Wipro, Hexaware, Mindtree and UST Global are actively working with GenRocket.

To learn more visit GenRocket.



Ricky Costa, CEO of Quantum Stat – Interview Series


Ricky Costa is the CEO of Quantum Stat, a company that offers business solutions for NLP and AI initiatives.

What initially got you interested in artificial intelligence?

Randomness. I was reading a book on probability when I came across a famous theorem. At the time, I naively wondered if I could apply this theorem to a natural language problem I was attempting to solve at work. As it turns out, the algorithm already existed, unbeknownst to me: it was called Naïve Bayes, a very famous and simple generative model used in classical machine learning. That theorem was Bayes’ theorem. I felt this coincidence was a clue, and it planted a seed of curiosity to keep learning more.
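
For readers unfamiliar with it, a Naïve Bayes text classifier takes only a few lines with scikit-learn. This is a generic sketch with toy data, not the problem Costa was solving:

```python
# A minimal Naïve Bayes text classifier: Bayes' theorem plus a
# conditional-independence assumption over word counts.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["great product, works well", "terrible, waste of money",
         "really happy with this", "broke after one day"]
labels = ["pos", "neg", "pos", "neg"]

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(texts, labels)
print(clf.predict(["happy with the product"]))   # -> ['pos']
```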

 

You’re the CEO of Quantum Stat, a company which offers solutions for Natural Language Processing. How did you find yourself in this position?

When there’s a revolution in a new technology, some companies are more hesitant than others when facing the unknown. I started my company because pursuing the unknown is fun to me. I also felt it was the right time to venture into the field of NLP, given all of the amazing research that has arrived in the past 2 years. The NLP community now has the capacity to achieve a lot more with a lot less, given the advent of new NLP techniques that require less data to scale performance.

 

For readers who may not be familiar with this field, could you share with us what Natural Language Processing does?

NLP is a subfield of AI and analytics that attempts to understand natural language in text, speech or multi-modal settings (text and images/video), and to compute over it to the point where you are driving insight and/or providing a valuable service. Value can arrive from several angles, from information retrieval in a company’s internal file system, to classifying sentiment in the news, to a GPT-2 Twitter bot that helps with your social media marketing (like the one we built a couple of weeks ago).
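
For a concrete taste of applied NLP (a generic example using the Hugging Face transformers library’s pipeline API, not Quantum Stat’s code), sentiment classification takes only a few lines:

```python
# A minimal sentiment-classification example with a pretrained model.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")   # downloads a default pretrained model
headlines = [
    "Markets rally as earnings beat expectations",
    "Regulators open investigation into data breach",
]
for headline, result in zip(headlines, classifier(headlines)):
    print(f"{result['label']:>8}  {result['score']:.2f}  {headline}")
```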

 

You have a Bachelor of Arts from Hunter College in Experimental Psychology. Do you feel that understanding the human brain and human psychology is an asset when it comes to understanding and expanding the field of Natural Language Processing?

This is contrarian, but unfortunately, no. The analogy between neurons and deep neural networks is simply for illustration and instilling intuition. One can probably learn a lot more from complexity science and engineering. The difficulty with understanding how the brain works is that we are dealing with a complex system. “Intelligence” is an emergent phenomenon arising from the brain’s complexity interacting with its environment, and it is very difficult to pin down. Psychology and other social sciences, which depend on “reductionism” (top-down), don’t work under this complex paradigm. Here’s the intuition: imagine someone attempting to reduce the Beatles’ song “Let it Be” to the C major scale. There’s nothing about that scale that predicts “Let it Be” will emerge from it. The same follows for someone attempting to reduce behavior to neural activity in the brain.

 

Could you share with us why Big Data is so important when it comes to Deep Learning and more specifically Natural Language Processing?

As it stands, because deep learning models interpolate data, the more data you feed into the model, the fewer edge cases it will see when making an inference in the wild. This architecture “incentivizes” large datasets to be computed by models in order to increase the accuracy of output. However, if we want to achieve more intelligent behavior from AI models, we need to look beyond how much data we have and more towards how we can improve the model’s ability to reason efficiently, which, intuitively, shouldn’t require lots of data. From a complexity perspective, the cellular automata experiments conducted in the past century by John von Neumann and Stephen Wolfram show that complexity can emerge from simple initial conditions and rules. What these conditions and rules should be with regard to AI is what everyone’s hunting for.
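
That emergence is easy to see for yourself. Below is a standard elementary cellular automaton, Wolfram’s Rule 110 (a generic textbook example, not tied to the interview), whose one-line update rule produces strikingly complex patterns:

```python
# Rule 110: a one-dimensional cellular automaton whose trivially simple
# update rule produces remarkably complex behavior.
RULE = 110
rule_bits = [(RULE >> i) & 1 for i in range(8)]   # lookup table for 3-cell neighborhoods

width, steps = 64, 32
row = [0] * width
row[-1] = 1                                        # single live cell as initial condition

for _ in range(steps):
    print("".join("#" if cell else "." for cell in row))
    row = [rule_bits[(row[(i - 1) % width] << 2) | (row[i] << 1) | row[(i + 1) % width]]
           for i in range(width)]
```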

 

You recently launched the ‘Big Bad NLP Database’. What is this database and why does it matter to those in the AI industry?

This database was created to give NLP developers seamless access to all the pertinent datasets in the industry. It indexes datasets, which has the nice secondary effect of making them queryable by users. Preprocessing data takes the majority of time in the deployment pipeline, and this database attempts to mitigate that problem as much as possible. In addition, it’s a free platform for anyone, whether you are an academic researcher, practitioner, or independent AI guru who wants to get up to speed with NLP data.

 

Quantum Stat currently offers end-to-end solutions. What are some of these solutions?

We help companies facilitate their NLP modeling pipeline by offering development at any stage. We can cover a wide range of services, from data cleaning in the preprocessing stage all the way up to model server deployment in production (these services are also highlighted on our homepage). Not all AI projects come to fruition, due to the unknown nature of how your specific data and project architecture will work with a state-of-the-art model. Given this uncertainty, our services give companies a chance to iterate on their project at a fraction of the cost of hiring a full-time ML engineer.

 

What recent advancement in AI do you find the most interesting?

The most important advancement of late is the transformer model; you may have heard of its descendants: BERT, RoBERTa, ALBERT, T5 and so on. These transformer models are very appealing because they allow the researcher to achieve state-of-the-art performance with smaller datasets. Prior to transformers, a developer would require a very large dataset to train a model from scratch. Since these transformers come pretrained on billions of words, they allow for faster iteration of AI projects, and that is what we are mostly involved with at the moment.
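
To show how little code a pretrained transformer needs before fine-tuning (a generic Hugging Face sketch; the checkpoint and label count are illustrative), here is loading BERT and running one forward pass:

```python
# Loading a pretrained transformer and running a forward pass. The
# pretrained weights already encode billions of words, so fine-tuning
# needs far less task data than training from scratch.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "bert-base-uncased"                       # any pretrained checkpoint works
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

inputs = tokenizer("Transformers need far less task data.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits              # untrained head: fine-tune before trusting it
print(logits.softmax(dim=-1))
```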

 

Is there anything else that you would like to share about Quantum Stat?

We are working on a new project dealing with financial market sentiment analysis that will be released soon. We have leveraged multiple transformers to give unprecedented insight into how financial news unfolds in real-time. Stay tuned!

To learn more visit Quantum Stat or read our article on the Big Bad NLP Database.



Deniz Kalaslioglu, Co-Founder & CTO of Soar Robotics – Interview Series


Deniz Kalaslioglu is the Co-Founder & CTO of Soar Robotics, a cloud-connected robotic intelligence platform for drones.

You have over 7 years of experience operating AI-backed autonomous drones. Could you share with us some of the highlights of your career?

Back in 2012, drones were mostly perceived as military tools. On the other hand, improvements in mobile processors, sensors and battery technology had already started creating opportunities for consumer drones to become mainstream. A handful of companies were trying to make this happen, and it became obvious to me that if the right research and development steps were taken, these toys could soon become irreplaceable tools helping many industries thrive.

I participated exclusively in R&D teams throughout my career, in automotive and RF design. I founded a drone service provider startup in 2013, where I had the chance to observe many of the shortcomings of human-operated drones, as well as their potential benefits for industries. I led two research efforts over a span of 1.5 years, in which we addressed the problems of autonomous outdoor and indoor flight.

Precision landing and autonomous charging were other issues that I tackled later on. Solving these issues meant fully-autonomous operation with minimal human intervention throughout the operation cycle. At the time, solving the problem of fully-autonomous operation was huge, and it enabled us to create intelligent systems that don’t need any human operator to execute flights, which resulted in safer, more cost-effective and more efficient flights. The “AI” part came into play later, in 2015, when deep learning algorithms could be effectively used to solve problems that were previously solved through classical computer vision and/or learning methods. We leveraged robotics to enable fully-autonomous flights and deep learning to transform raw data into actionable intelligence.

 

What inspired you to launch Soar Robotics?

Drones lack sufficient autonomy and intelligence features to become the next revolutionary tools for humans. They become inefficient and primitive tools in the hands of a human operator, both in terms of flight and post-operation data handling. Besides, these robots have very little access to real-time and long-term robotic intelligence that they can consume to become smarter.

As a result of my experience in this field, I have come to the understanding that the current commercial robotics paradigm is inefficient, which is limiting the growth of many industries. I co-founded Soar Robotics to tackle some very difficult engineering challenges to make intelligent aerial operations a reality, which in turn will provide high-quality and cost-efficient solutions for many industries.

 

Soar Robotics provides a fully autonomous, cloud-connected robotics intelligence platform for drones. What are the types of applications that are best served by these drones?

Our cloud-connected robotics intelligence platform is designed as a modular system that can serve almost any application by utilizing the specific functionalities implemented within the cloud. Some industries such as security, solar energy, construction, and agriculture are currently in immediate need of this technology.

  • Surveillance of a perimeter for security,
  • Inspection and analysis of thermal and visible faults in solar energy,
  • Progress tracking and management in construction and agriculture

These are the main applications with the highest beneficial impact that we focus on.

 

For a farmer who wishes to use this technology, what are some of the use cases that will benefit them versus traditional human-operated drones?

As with all our applications, we also provide end-to-end service for precision agriculture. Currently, the drone workflow in almost any industry is as follows:

  • the operator carries the drone and its accessories to the field,
  • the operator creates a flight plan,
  • the operator turns on the drone and uploads the flight plan for the specific task at hand,
  • the drone arms, executes the planned mission, returns to its takeoff coordinates, and lands,
  • the operator turns off the drone,
  • the operator shares the data with the client (or the relevant department if hired in-house),
  • the data is processed to become accurate, actionable insights for the specific industry.

It is crucial to point out that this workflow has proven to be very inefficient, especially in sectors such as solar energy, agriculture and construction, where collecting periodic and objective aerial data over vast areas is essential. A farmer who uses our technology is able to get measurable, actionable and accurate insights on:

  • plant health and vigor,
  • nitrogen intake of the soil,
  • optimization and effectiveness of irrigation methods,
  • early detection of diseases and pests

All without having to go through the hassle mentioned above, or even clicking a button every time. I firmly believe that enabling drones with autonomous features and cloud intelligence will provide considerable savings in terms of time, labor and money.

 

How will the drones be used for solar farm operators?

We handle almost everything that needs counting and measuring in all stages of a solar project. In the pre-construction and planning period, we generate topographic models, hydrologic analyses and obstacle analyses with high geographic precision and accuracy. During the construction period, we generate daily maps and videos of the site. After processing the collected media, we measure the installation progress of the piling structures, mounting racks and photovoltaic panels; take position, area and volume measurements of trenches and inverter foundations; and count the construction machinery, vehicles and personnel on the site.

When construction is over and the solar site is fully operational, Soar’s autonomous system continues its daily flights, but this time generating thermal maps and videos along with visible-spectrum maps and videos. From thermal data, Soar’s algorithms detect cell, multi-cell, diode, string, combiner and inverter level defects. From visible-spectrum data, Soar’s algorithms detect shattering, soiling, shadowing, vegetation and missing panels. As a result, Soar’s software generates a detailed report of the detected faults and marks them on the as-built and RGB maps of the site down to cell level, as well as showing all detected errors in a table indicating string, row and module numbers with geolocations. It also estimates the client’s total loss due to the inefficiencies caused by these faults, and prioritizes each fault depending on its importance and urgency.
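
As a much-simplified illustration of thermal defect detection (my own toy sketch on synthetic data, not Soar’s algorithms), hotspot candidates can be flagged wherever temperature deviates strongly from the panel-wide norm:

```python
# A toy sketch of thermal hotspot detection on a simulated panel image:
# flag pixels that sit several standard deviations above the mean.
import numpy as np

rng = np.random.default_rng(0)
thermal = rng.normal(loc=35.0, scale=1.5, size=(60, 80))  # ambient panel temps (deg C)
thermal[20:24, 40:44] += 15.0                              # inject a defective cell

mean, std = thermal.mean(), thermal.std()
hotspots = thermal > mean + 4 * std                        # simple z-score threshold

ys, xs = np.nonzero(hotspots)
print(f"{hotspots.sum()} hotspot pixels flagged")
print(f"bounding box: rows {ys.min()}-{ys.max()}, cols {xs.min()}-{xs.max()}")
```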

 

In July 2019 Soar Robotics joined NVIDIA’s Inception Program, an exclusive program for AI startups. How has this experience influenced you personally, and how has it influenced the way Soar Robotics is managed?

Over the months, this has proven to be an extremely beneficial program for us. We had already been using NVIDIA products both for onboard computation and on the cloud side. The program has a lot of perks that have streamlined our research, development and test processes.

 

Soar Robotics will be generating recurring revenue with Robotics-as-a-Service (RaaS) model. What is this model exactly and how does it differ from SaaS?

It possesses many similarities with SaaS in terms of its application and its effects on our business model. The RaaS model is especially critical since hardware is involved; most of our clients don’t want to own the hardware and are only interested in the results. Cloud software and the new generations of robotics hardware blend together more and more each day.

This results in some fundamental changes in industrial robotics, which used to be about stationary robots performing repetitive tasks that didn’t need much intelligence. Operating under this mindset, we provide our clients with robot connectivity and cloud robotics services to augment what their hardware would normally be capable of achieving.

Therefore, Robotics-as-a-Service encapsulates all the hardware and software tools that we utilize to create domain-specific robots for our clients’ purposes, in the form of drones, communications hardware and cloud intelligence.

 

What are your predictions for drone technology in the coming decade?

Drones have clearly proven their value for enterprises, and their usage will only continue to increase. We have witnessed many businesses trying to integrate drones into their workflows, with only a few of them achieving great ROI and most of them failing due to the inefficient nature of current commercial drone applications. Since the drone industry hype began to fade, we have seen a rapid consolidation in the market, especially in the last couple of years. I believe that this was a necessary step for the industry, which opened the path to real productivity and better opportunities for products and services that are actually beneficial for enterprises. The addressable market that commercial drones will create by 2025 is expected to exceed $100B, which in my opinion is a fairly modest estimate.

 

  • We will see an exponential rise in “Beyond Visual Line of Sight” flights, which will be the enabling factor for many use cases of commercial UAVs.
  • The advancements in battery technology such as hydrogen fuel cells will extend the flight times by at least an order of magnitude, which will also be a driving factor for many novel use cases.
  • Drone-in-a-box systems are still perceived as somewhat experimental, but we will definitely see this technology become ubiquitous in the next decade.
  • There have been ongoing tests conducted by companies of various sizes in the urban air mobility market, which can be broken down into roughly three segments: last-mile delivery, aerial public transport and aerial personal transport. The commercialization of these segments will definitely happen in the coming decade.

 

Is there anything else that you would like to share about Soar Robotics?

We believe that the feasibility and commercialization of autonomous aerial operations mainly depend on solving the problem of aerial vehicle connectivity. For drones to be able to operate Beyond Visual Line of Sight (BVLOS) they need seamless coverage, real-time high throughput data transmission, command and control, identification, and regulation. Although there have been some successful attempts to leverage current mobile networks as a communications method, these networks have many shortcomings and are far from becoming the go-to solution for aerial vehicles.

We have been developing a connectivity hardware and software stack that has the capability of forming ad hoc drone networks. We expect that these networking capabilities will enable seamless, safe and intelligent operations for any type of autonomous aerial vehicle. We are rolling out the alpha and beta releases of the hardware in the coming months to test our products with larger user bases under various usage conditions, and to start forming these ad hoc networks to serve many industries.

To learn more visit Soar Robotics or to invest in this company visit the Crowdfunding Page on Republic.
