Maps, a crucial piece of the autonomous puzzle, and our investment in Mapillary


High definition maps play a key role in autonomous driving, yet there are many challenges to tackle. Our investment in Mapillary, an independent provider of street-level imagery and map data, can help address these challenges and make autonomous driving a reality.

Why are maps important?

World map by Gerard van Schagen, Amsterdam, 1689

Maps have played a key role in mobility and trade throughout history. In ancient Mesopotamia, Babylonians engraved a map on clay tablets to find their way around the holy city of Nippur. Anaximander created a map that helped Greek trade ships navigate the Aegean Sea. Yet there has been a flurry of recent news on maps: a coalition of carmakers (incl. our parent company), later joined by suppliers from all around the world, acquired HERE for several billion dollars. Uber is investing $500M in its global mapping project. Google acquired Waze for over a billion dollars and has been investing heavily through Waymo and Google Maps. SoftBank led a $164M funding round for Mapbox very recently.

Maps, specifically high definition (HD) maps, are at the heart of the autonomous vehicle (AV) ecosystem for a few reasons. First of all, AVs need detailed maps for accurate and precise localization. Today’s GPS technology is far from perfect, not to mention the issues with the “urban canyon” environment (yes, this is why your Uber shows up a few blocks away). This is especially true under challenging weather conditions, where driving without HD maps has been compared by industry analysts to “putting a blind person behind the wheel”. Thanks to HD maps, AVs can match road furniture and triangulate their position to get centimeter-level accuracy. Another important reason for having detailed maps is sensor redundancy. AVs are equipped with many sophisticated sensors (LiDAR, camera, radar, etc.), and some of these sensors can collect millions of laser points per second, resulting in hundreds of GBs of data per hour. Maps can provide foresight, which can lower the workload on sensors and processors. Finally, maps can help AVs see what’s around the corner (e.g. the latest accident or construction data) and further down the road. The sensors we’ve mentioned have a limited range: a few hundred meters under even the best weather conditions. That translates to a handful of seconds if you are driving 70 mph. Maps can integrate real-time traffic information to make the ride safer and more comfortable.
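
The landmark-matching idea can be sketched in a few lines. This is a hypothetical illustration (not Mapillary’s or any vendor’s actual pipeline): given a few map landmarks with known positions and sensor-measured distances to them, a small Gauss-Newton least-squares refinement pulls a coarse GPS fix down to centimeter-level error.

```python
import numpy as np

# Known landmark positions from an HD map (meters, local frame) --
# e.g. traffic signs or poles the vehicle's sensors can recognize.
landmarks = np.array([[0.0, 0.0], [50.0, 0.0], [0.0, 40.0]])
true_pos = np.array([12.0, 17.0])
# Distances the vehicle's sensors measure to each landmark.
ranges = np.linalg.norm(landmarks - true_pos, axis=1)

pos = np.array([20.0, 5.0])  # coarse GPS estimate, several meters off
for _ in range(10):
    diffs = pos - landmarks                  # vectors landmark -> estimate
    dists = np.linalg.norm(diffs, axis=1)    # predicted ranges
    residuals = dists - ranges               # predicted minus measured
    J = diffs / dists[:, None]               # Jacobian of range w.r.t. position
    step, *_ = np.linalg.lstsq(J, residuals, rcond=None)
    pos = pos - step                         # Gauss-Newton update

# pos now agrees with true_pos to well under a centimeter
```

With exact range measurements the estimate converges in a handful of iterations; real systems add many more landmarks and noise models, but the core math is this small.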

On another note, maps are really interesting from an investing perspective, as map data could be created and consumed by AVs through a common cloud-based platform. This could create a virtuous cycle allowing the solution to get even better and more defensible over time with usage. As highlighted by another investor, “maps have network effects”.

What are the key challenges?

However, creating a detailed map with the precise location of every piece of road furniture, incl. traffic signs and lane markings, among others, and updating it in near real time (according to TomTom, about 15% of roads change every year!) is not a trivial task, and there are several key challenges. First, there is a need for huge deployment: millions of cars on the streets sending their sensor data back to the cloud to create a detailed and up-to-date map. Another challenge is the inconsistency of road furniture across locations, which makes automating the process even more complicated. The location and structure of signs change even state by state; e.g. hanging traffic lights are very common in many states, whereas we have poles here in California. This might sound simple, but think about the scale of this problem if you are a carmaker shipping vehicles all around the world. An additional challenge is stitching together the images and videos from a variety of sensors. Even the angle of the sensors impacts the process, not to mention the inconsistent capture quality of different sensors on different vehicles.

On the other hand, today’s traditional map companies operate in silos, running their own survey vehicles around the world to collect data and updating their maps accordingly, another manual and cumbersome process. This approach is very costly, as these survey vehicles are equipped with expensive sensors, and they can only cover a small area given the effort required. As a result, many different companies cover the same limited areas while incurring huge costs, and no single player ends up with the most accurate data. This leads to very slow update cycles, and most of the data installed in vehicles quickly becomes outdated. Additionally, there are organizations (municipalities, among others) that are willing to share the map data they’ve collected but can’t find a platform for it.

Our investment in Mapillary

Source: Mapillary AB

As we were learning more about maps and their challenges, we met Jan Erik Solem, the founder and CEO of Mapillary. Jan Erik has been active in the computer vision space since the late 90s. His PhD work on 3D reconstruction turned into Polar Rose, a startup providing a facial recognition solution running across mobile, cloud, and desktop environments. Polar Rose was acquired by Apple in 2010, and the company’s technology helped power some of the latest face detection and recognition APIs and features in Apple products. Following the acquisition, Jan Erik ran a computer vision team at Apple before leaving to start Mapillary. He has also been a professor at Lund University and has written books about programming and computer vision (one of them still on the top lists after 5+ years).

Jan Erik started Mapillary in late 2013 with Yubin Kuang (his former PhD student), Peter Neubauer, and Johan Gyllenspetz. Mapillary has an impressive team, including winners of the Marr Prize, one of the top honors for computer vision researchers. The team publishes their findings and openly shares their data and some of their code on Mapillary’s research website - https://research.mapillary.com/.

Jan Erik describes his vision in a blog post as follows:

“The core idea behind Mapillary is to combine people and organizations with very diverse motives and backgrounds into one solution and one collective photo repository, sharing in the open and helping each other. This means our awesome community, our partner companies and partner organizations, even our customers. That’s right, we’re incentivizing our customers to share their data into the same pool as everyone else, in the open, with an open license...

That we don’t have a mapping or navigation product means we can happily partner with any mapping or navigation company without being in competition. We’re neutral, open, and can work with anyone as long as it benefits our long term vision of visually mapping the planet. This means that photos and data from these partnerships will benefit OpenStreetMap too.”

Source: Mapillary AB

Mapillary is a street-level imagery platform for generating map data at scale through collaboration and computer vision. Mapillary’s technology allows users to upload pictures in a device-agnostic way and generate map data anywhere. Unlike many other players, Mapillary doesn’t need expensive survey vehicles with special gear to achieve quality data. Instead, Mapillary has invested its resources in developing computer vision algorithms capable of handling data from such a range of different sensors and compensating for any shortcomings of consumer-grade devices. Additionally, it is estimated that there will be 45 billion cameras by 2022, which could fuel the data capturing process further.

Mapillary has a strong network of contributors who want to make the world accessible to everyone by creating a virtual representation of it. Anybody can join the community and collect street-level photos using simple devices like smartphones or action cameras. This means individuals as well as companies, governments, and NGOs. Recently, Mapillary’s technology helped with disaster recovery for the hurricanes in Florida and Houston by enabling Microsoft to upload millions of images quickly. Currently, Mapillary has more than 260M images uploaded, 4M km mapped, 190 countries covered, and 22B+ objects recognized with computer vision.

Last May, the company released the Mapillary Vistas Dataset, the world’s largest street-level imagery dataset for teaching machines to see. The dataset includes 25K high-resolution images spanning 100 object categories, with global geographic reach and highly variable weather conditions. For autonomous driving, this means that cars will be able to better recognize their surroundings in different street scenes, which in turn helps improve safety. Using this training data and developing new approaches in computer vision research, Mapillary has achieved the best results in semantic segmentation of street scenes on two renowned benchmarks.

We believe there is a strong data network effect in Mapillary’s business. More engagement from the community results in more engagement from customers, who contribute images too. This is further catalyzed by improving computer vision algorithms, developed by Mapillary’s high-caliber research team. As a result, community members and customers extract more value in return, and this drives further growth.

Given the challenges we’ve highlighted above, we believe, just like Mapillary, that there is a need for an independent provider of street-level imagery and map data, one that can also act as a sharing platform among different players. We believe that sharing data is crucial for accurate maps and safer autonomous vehicles, as no single player has the necessary deployment in place to ensure this. We are convinced that every company should have access to the most accurate map data; the safety of AV passengers shouldn’t be a differentiator. With its world-class team, passionate community, and unique capabilities, Mapillary is well positioned to address these challenges and help make AVs a reality. We couldn’t be more excited to lead this round of investment and to join Atomico, Sequoia, and LDV, along with new investors NavInfo and Samsung, to shape the future of mapping!

Baris is an engineer with experience in venture capital and top-tier investment banking. At BMW i Ventures, Baris’s investing scope encompasses a variety of areas, including Industry 4.0, autonomous driving, mobility, AI, digital car/cloud, customer digital life, and energy services. Baris holds a Master of Engineering and Technology Management from Duke University along with an MBA (Dean’s Fellow) from UNC Kenan-Flagler Business School, where he led VCIC, the world’s largest venture capital competition. Please feel free to reach out on baris@bmwiventures.com.

Ridecell wins SXSW Interactive Innovation Award

Read more on Globalfleet

Ridecell, a platform developer for car-sharing and ride-sharing operators, has won the Interactive Innovation Award at South by Southwest (SXSW) in Austin, Texas. The San Francisco-based company received the accolade for its Autonomous Operations Platform. According to the judges, Ridecell was “the most exciting tech development among New Economy finalists”. 

“While many innovators, including our own Auro brand, are teaching vehicles how to drive, the Ridecell Autonomous Operations Platform teaches fleets of driverless vehicles how to maintain themselves and manage their own emergency situations”, explained Aarjav Trivedi, Ridecell CEO.

Ridecell's Autonomous Operations Platform enables on-demand ride-hailing and automates operations for autonomous-vehicle fleets. It handles rider on-boarding, routine vehicle deployment, user accounts and response to emergency situations, as well as both scheduled and unscheduled refuelling, maintenance, cleaning and roadside assistance stops. 

BMW i Ventures invests in Blackmore Sensors and Analytics

Mountain View, CA – March 20, 2018… BMW i Ventures has announced an investment in Blackmore Sensors and Analytics, Inc., a leading developer of frequency-modulated continuous wave (FMCW) lidar for the automotive industry.

“Advances in new sensor technologies, like lidar, are going to make cars safer and, eventually, autonomous,” said BMW i Ventures partner Zach Barasz.  “Blackmore has unique and innovative FMCW lidar technology that delivers a new dimension of data to future vehicles.”

Low-cost lidar sensors are required to enable self-driving vehicles and, in addition to being more cost-effective, Blackmore’s FMCW lidar technology has several competitive advantages over traditional pulsed lidar systems that enable autonomous driving teams to achieve their goals faster. 

“Having the ability to measure both the speed and the distance to any object gives self-driving systems more information to navigate safely,” said Dr. Randy Reibel, Blackmore’s CEO. 

Blackmore will use the investment to scale the production of its FMCW lidar sensor for advanced driver assistance systems (ADAS) and self-driving markets. Increased production capacity will allow Blackmore to support the growing sector of autonomous driving teams demanding a superior lidar solution.

BMW and Toyota are investing in a start-up that makes self-driving shuttles

Read more about it on CNBC

Founded in 2017 by auto engineers Edwin Olson, Alisyn Malek and Steve Vozar, the company recently raised $11.5 million in seed funding from BMW i Ventures and Toyota, and major early-stage firms including Maven Ventures and Y Combinator.

Unlike Alphabet's Waymo, GM-owned Cruise, or Tesla, which are all working on level-5 autonomous vehicles, May Mobility's electric shuttles were designed to move along short routes that have been mapped inside a 10-square-mile footprint.

"We're seeing a lot of interest from municipalities and real estate developers," said COO Malek, formerly of GM. "We expect autonomous vehicles to inspire a lot more public-private partnerships in transportation."

BMW i Ventures' Uwe Higgen, an investor in May Mobility's seed round, thinks it makes sense for the company to deploy its shuttles around long-term parking lots at airports, or on big corporate campuses to move workers around.

Venture Capital Perspective: The Three Rules of AI Investing


A funny thing I’ve heard on more than one occasion is that accelerators have been telling their startups to use AI buzzwords like deep learning and machine learning in their pitches to investors. The idea goes that the unsophisticated investor, impressed by these terms, will come knocking on the door with a term sheet, willing to shell out money to founders at any price.

While I exaggerate slightly here, the general notion holds true: with all this AI hype, it’s easy to get lost in this convoluted maze where AI is everywhere and everything. Accordingly, I’ve vowed not to be duped by anyone spouting the magical language of “AI.” Motivated partly out of intellectual curiosity but more likely out of chronic insecurity, I’ve developed three main rules to keep in mind when investing in AI. Today I share that with you:

1. Applied AI companies are where the party’s at. I don’t care about your next-gen neural net.

You can divide AI companies roughly into two camps: the general-purpose AI tech company and the applied-AI company. A general-purpose AI company is one that provides a general AI technology (e.g. neural nets or some other new deep learning algorithm) that can be generalized across all fields of study and must be trained for implementation into an actual product. These are companies that tend to have great technology but often lack a specific application for their technology. By contrast, an applied-AI company is one that uses AI technology as a means to serve a specific business or product problem. These are companies that have seen a need in the market, such as autonomous driving, and are using AI techniques to answer that need.

As a general rule of thumb, I only look at applied AI companies. These are the only ones that I consider “venturable” for a couple of reasons:

1. A scalable business model: In comparison with applied AI companies, general-purpose AI technology companies tend not to have a product they can sell and make money in a scalable way. No potential customer is going to license such a company’s algorithms if they have not been trained to meet a specific business need. With such a company, what you have in the end is a bunch of code that still needs a lot of customization, and this is not the type of scalable product that makes a good venture case.


2. More favorable exit prospects: Acquisition opportunities for general-purpose AI technology companies are small and will get smaller as corporations build out their AI divisions. In the beginning of the AI hype curve, we may have seen a few big exits of general-purpose AI companies. The most notable was Google’s $400M acquisition of the neural-net company DeepMind, which ultimately became Google’s deep learning R&D outfit. Such early acquisitions were effectively R&D investments or acquihires. As big corporates implement their AI strategies, they will become less likely to pay big dollars for such expertise alone; they will instead seek to acquire applied AI companies that they can integrate into their existing products, leaving acquisitions of general-purpose AI technology companies to the wayside.

That said, general-purpose AI companies may still be a good bet if you invest early, such as at the seed stage. In these cases, an acquihire is the most likely exit event for such companies, so you had better make sure you invest in a team with a lot of PhDs. Acquihires tend to be smaller exits, so a very early investment would likely be necessary to see any return at the end.

So remember: it’s all about applied AI!

2. All that matters is if the darned thing works

Ok, so now that we have this great applied AI company, how do we know if it’s the best in its field? Some people will say it’s the algorithms! Others will say it’s the data! Both are wrong. A major conundrum in AI is that the most capable of such technologies — namely deep neural nets — are effective black boxes, so convoluted and unintelligible that mere mortals with decades of related experience cannot understand them. Accordingly, in the end, the only way to evaluate any such technology comes down to one thing: how it performs in the real world.

So remember: in AI, a crappy algorithm trained on good data can beat out a super algorithm trained on bad data. What truly matters is how the system as a whole performs.


3. A tale of the lettuce head kingdom. Play in a market niche where you can win (through data).

A few weeks ago, I was at a talk by AI guru Andrew Ng who told a story of how a former group of his students built their company. One day, these students, a group of Stanford engineers, ended up at some farm in Northern California and began taking pictures with their cellphones of heads of lettuce. One head there. Snap! One head here. Snap! Another head over there. Snap! Over time, the students amassed the greatest collection of lettuce-head images in the world. They founded the company, Blue River Technology, which manufactured agricultural robots using high-precision herbicide spray technology trained on their lettuce head images. The robots eliminated 90% of the herbicide use on farms, saving farmers money and fortifying against resistant weed growth. In October 2017, less than a decade later, John Deere bought Blue River for $305 million.

The reason behind Blue River’s success can be distilled into a matter of AI strategy: it built an empire in a specific market niche where it acquired more relevant data than any of its competitors to train its AI systems in the most meaningful way. In this brave new world of AI, data means everything. To train any algorithm to solve a given problem, you need data, particularly high-quality data and lots of it. The more data you have, the more accurately your AI system can operate. What this means for companies is that data, and naturally the strategic use of it, can make all the difference in obtaining an edge over one’s competitors. In this manner, Blue River collected the biggest repository of lettuce-head images to train its robots to recognize the difference between a good and a bad lettuce head, which enabled it to optimize its herbicide-spraying robot. With every lettuce head its robot encountered, its algorithms became further refined, more powerful, and more accurate. Blue River’s lettuce-head treasure trove, and the AI system built on top of it, became its competitive moat. Once the company built out its database of lettuce-head images, it became difficult for any competitor to catch up — both in terms of data and the efficacy of AI systems trained on that data.

This strategy — building a data empire in a niche area — is the same reason why you should not expect any emerging upstarts to topple the empires of AI monoliths like Facebook and Google any time soon. Google, for example, has collected over two decades of data about every click, view, and search of yours to deliver to your screens the most optimized search results in existence today. Such vast empires will endure, gorging on their Bacchanalian feasts of data to an extent no other competitor can match.

So remember: Lettuce-head empires pay off. Build your startup empire in a specific market niche that no other player has a data monopoly over. Then sit back and let the lettuce heads roll.

There will always be, I admit, exceptions to these rules. However, these lessons have served me well, and I hope they will serve you well too. Happy startup-ing.