• Emotion Notions from CES 2019

    | Jan 12, 2019
    This year it seems like emotion recognition was everywhere at CES, especially in the car. A huge deal was made about Kia’s emotion recognition announcement, but many other companies were showing off similar abilities. Qualcomm, Jungo, Valeo, Nuance, Hyundai Mobis, and Toyota Boshoku all had technology that recognizes facial expressions (sometimes augmented with IR heat maps and biometric data) to determine people’s emotional states. As UX innovation leaders, we’re always on the lookout for new technology, and we’ve been experimenting with and demoing emotion recognition technology ourselves. One thing we find ourselves asking about emotion recognition technology – as we do with all new technology – is what value does it bring to the user?

    We seem to be in the “inflated expectations” part of the hype cycle for emotion recognition, and CES did not present a truly useful use case. Demos we saw tended to focus on recognizing “bad” emotions like fear, anger, sadness, and boredom, and then “correcting” these states. Much of the incentive for increasing a driver’s happiness quotient comes from naturalistic studies showing that agitated drivers face a crash risk roughly 10 times greater than their emotionally neutral counterparts. Boredom, though not as well studied, is similarly assumed to be associated with daydreaming or distraction, and hence an increased crash risk.

    Human emotion is very complicated and there are many deeply individual factors that alter someone’s mood. Perhaps people aren’t looking for their cars to tell them to breathe deeply or smooth over their relationship woes. Should we even assume that the car can bear the responsibility or has the agency to alter people’s mood in the first place? Maybe it’s okay to have “socially unacceptable” emotions like anger, irritation, or sadness within the private confines of the vehicle – especially if we have ADAS features that can ensure safe driving at all times.

    Another thing to consider when evaluating the usefulness of emotion recognition and mitigation is whether there could be unintended consequences of trying to moderate people’s behavior. While we can’t foresee the full social impact, it’s easy to imagine worst-case scenarios. Cars that slow down and play soft music in an attempt to calm angry drivers may only make them more agitated, which could have an escalating effect as more and more cars slow down, creating collectively enraged passengers.

    So instead of trying to alter emotional states, maybe the right strategy is to complement them. When we’re angry, friends often share our anger, helping us to blow off steam. When we’re sad, good friends don’t invalidate our feelings and try to talk us out of being sad, they listen and commiserate. Perhaps we need an emotionally intelligent car that can pay attention to our behavior and, instead of reading our emotional state in an attempt to “manipulate” us, learns what we like to do when we’re sad, angry, happy, or bored, and supports us.

    This isn’t to say that emotion recognition and mood alteration are always bad; there may be times when they’re good. An autonomous car that picks up a child would be even more valuable if it could calm an upset or frightened child.

    This makes me think that the Lovot companion robot may actually have an auto use case after all. This cute, cuddly robot was another CES hit, one that learns its owner’s behavior and bonds over time. Perhaps the best we can do with emotion recognition is to develop ways to create stronger bonds with our cars.


    Jacek Spiewla
    User-Experience Manager, Advanced Development

    Go comment!
  • Hunting for Car-able Tech at CES 2019

    | Jan 10, 2019

    Car companies didn't start out exhibiting at CES; they started coming to the show to scout for new, inventive consumer technology that might make its way into the car. Even though car companies have made CES their own over the last decade or so by plugging their own wares, CES is still a hotbed of consumer innovation. As an homage to the original auto intent, here are a few things outside of automotive that are pretty cool – maybe even a little bit out there – that may one day make the car a better place.


    3D immersive sound. Demoed by students from the University of Southampton, the AudioScenic project allows for a fully immersive 3D audio experience with an array of speakers in a single bar. You might argue that the car already has speakers surrounding the occupants, so why bother? Simply put – existing audio setups can’t fine-tune the sound to each ear. AudioScenic tracks the user’s head position and uses that to form independent audio beams to each ear, making for an amazingly lifelike audio experience. With this technology, you could personalize the audio experience for each passenger, letting everyone hear their own music – without needing earphones. Each person would hear sounds in an immersive 3D soundscape that would change as they turned their head, creating the illusion of being in a concert hall, a jungle, or any other sound-rich environment. While this could enable cinematic-like entertainment effects, it could also be used for a number of safety applications. Alerts positioned in 3D space would give a driver blind-spot (or other) warnings that would immediately cue them to the actual direction of the problem.


    Snore-prevention earphones. While we’re on the topic of audio, noise-cancelling earphones from QuietOn were on display, designed to give people a quiet rest from their snoring bedmates. That got us thinking – if carmakers want to deliver personalized experiences for everyone in the car, it’s not too hard to imagine the family minivan coming with several sets of Bluetooth earpieces. They could charge in special docks in the car, and each earpiece would be linked to the car via individual profiles, letting each passenger travel in their own isolated bubble or letting audio pass through for conversations as desired. For example, one person could restfully nap on a long autonomous car ride while the kids watch loud action movies.


    Water extraction. A company called Watergen was demoing a small unit that quickly extracts clean, pure water from the humidity in the air. While that might not work in Death Valley, it’d be perfect nearly everywhere else with at least 20 percent humidity in the air. This type of unit in a car could ensure users always have clean water at their disposal – no need to remember those pesky water bottles. Heck, we already have eight cup holders per car; why not have them automatically fill our water bottles too?


    Rollable display. The LG OLED TV R is a small, discreet box that unfurls a 4K 65” display on command. It’s also got a “line” mode, where only a foot or so unrolls, creating a long, horizontal display ... like an instrument cluster, maybe? This could enable the car to have a screen that rolls out to maximum height when watching movies is more important than looking out the windshield, yet rolls back to “cluster height” when someone actually wants to drive. Windshield angle still needs to be solved, but it sounds like a pretty cool application to us.


    Massage chair. Massage chairs are all the rage at CES 2019, and even Lamborghini has gotten into the act with their own $30,000 branded massage chair. We understand that Lamborghini and autonomous don’t really belong in the same sentence, but wouldn’t a massage chair be the perfect accessory for an autonomous ride? You know the instant we have Level 5 cars on the streets, it won’t be long before there’s a massage chair bolted into one, so why not make it a luxury option? (A few automakers have massage seat options in top-end cars but nothing like this.) Maybe Lamborghini didn’t go far enough – just pop one of their full-body Shiatsu massage chairs into an Urus and we’ll be all set.


    Sure, some of this is out there – for now. But in five or 10 years, maybe it’ll all make sense. Just like your own in-car music seemed a bit loopy in the ’60s, when Chrysler added a phonograph to the car. Have you seen some emerging tech at CES? Come see us at the show; we’d love to talk about how your favorite new product could change the automotive experience!


    Brian Debler
    HMI Lead Engineer, Advanced Development

    Go comment!
  • Mind the Gaps – What's NOT at CES 2019

    | Jan 09, 2019


    If you're not here in person at the Consumer Electronics Show, you can get a glimpse of what you're missing at CNET, Wired, or The Verge. We thought we’d talk about something those guys aren’t – what's not being shown. We think this is just as telling and can help us all keep an eye on where we should be moving next.


    Let's start with C-V2X – otherwise known as Cellular Vehicle-to-Everything. While it's clear that Qualcomm and others are showcasing the technology, so far most of the demos look like standard V2X safety cases or maybe even an extension of cloud streaming. This seems like a lost opportunity to us. We're exploring what to do with the unique capabilities of C-V2X via the Network Platoon. Why create further isolation – over 76% of commuters drive alone – when you can help people feel more connected?


    Autonomous vehicles were everywhere at CES this year: Kia, Nissan, Bosch, Panasonic, May Mobility – the list goes on. It's pretty well established that autonomous isn't all that new anymore. (If you’re not looking at both autonomous and electric, you are so 2017.) It's also abundantly clear that the autonomous car is going to create freedom for occupants. What’s unclear is the vision for this self-driving world. We’re seeing a lot of empty pods with full screens, presumably because all we want to do with our newly freed travel time is watch TV. Yawn. Audi at least suggests making the autonomous ride a fully cinematic experience, with the car rockin’-and-rollin’ to the rhythm of the in-car movie thanks to some smart air shocks.


    It’s great to take advantage of the car in ways that make it unique but what about use cases beyond entertainment? What about catching up on some sleep? Or giving everyone in the car their choice of what to do on a long drive? Maybe the driver could catch up on some sleep while one passenger watches a movie. Another passenger could call into a meeting and work on their laptop. Is anyone showing that off yet? It’s early in the autonomous revolution but as an industry we’ve got to think of the car as more than a travelling home theater. Infinitely configurable interiors have got to be next. Let’s personalize every seat in the pod.


    Emotion recognition is a nascent technology and lots of companies seem to be jumping on the bandwagon here at CES. However, there aren’t too many compelling use cases yet, and it feels like a technology invented without a reason. Let’s give it a reason! Can we use emotions to predict what music would be best in the car’s playlist? What about using it to recognize angry or depressed drivers – who face more than 10 times the accident risk – and having the ADAS ECU take over the wheel? How about sensing when the driver is confused by the car’s navigation instructions and having the nav system clarify? These last two are things we’re testing out for OEM feedback. Any other good ideas for a truly empathetic car?


    Fully integrated product lines also made a splash today at LG’s booth. They enable a very Jetsons-like experience, where your mirror starts recommending outfits and your washing machine knows how to wash them. While this is a cool concept – especially for those of us in high tech who rely on logo-wear to keep our wardrobes fresh – does anyone ever buy only a single brand? When tech is moving this fast, it usually doesn’t talk across companies: you either have to wait for one player to dominate and form a de facto standard, or for a bunch of competitors to find a reason to standardize together. Otherwise, your LG appliance isn’t likely to talk to your Kenmore one. Likewise, your Amazon Alexa, Siri, and Google assistants aren’t going to be conversant with each other. We’re trying to solve that problem one piece at a time with our Virtual Assistant Router and our mobile payment solution. Is anyone else looking at this problem?


    Hey we love tech too but let’s make sure we’re creating it to make better experiences for customers and everyone around them. How can tech not just make more “stuff to buy” but truly improve our lives?


    Gareth Williams
    Executive Director of Advanced Development

    Go comment!
  • From Solitary to Sociable: The Network Platoon

    | Dec 06, 2018
    According to the 2016 US Census, over 76% of commuters drive alone. Considering there is often quite a distance for many people between work and home, this adds up to a fair amount of time spent in solitary splendor. While some people enjoy the “me time” away from kids and coworkers, this isolation can also compound feelings of loneliness – a significant challenge faced by increasing numbers of people in the digital age.

    At Mitsubishi Electric, we’re working on both autonomous driving and cellular-based vehicle-to-everything networking technology (C-V2X), and got to thinking – could we somehow combine these enabling technologies to make people feel more connected? And that's how we invented the social network platoon.

    Imagine for a moment that you're on your way home from work in your self-driving car. You've got time to spare, so why not see what others around you are doing? You pull up the car's infotainment system and see who's opted in to a platoon. There's a car next to you with a woman acting as DJ, broadcasting her favorite tunes; you tap into her mix because the current song is one you really like. A person several cars ahead is leading a platoon around construction or slow-moving traffic by using his autonomous drive display to indicate the best route for the car to follow. So you tell your car to join his platoon to take advantage of a little human guidance. Then you settle in to pick up a chat from last week with a couple that usually carpools in your direction at the same time. You can't see them, but you know they're a few cars behind so you send them a picture of the funny new billboard you don’t want them to miss.

    You could also broaden your social network by allowing your car to use the C-V2X backbone to create an ad-hoc social network. Instead of deepening isolation, your commute might allow you to strike up friendships with new people – one-time strangers you'd never otherwise know, yet who drive the same roads as you do every day. Hear new music; get exposed to new interests – all with people sharing the same physical proximity as you.

    Maybe it's not new interests you're after. With our social network platoon, you could also find lots of like-minded people. For example, fans headed to the same concert or sports game that you are – people who share your love for the same songs, and gripe about the same rival teams.

    This future isn't that far away. We're bringing demos of this technology to CES to test out these concepts on real people. We think it'd be really cool to be able to "like" the people in the car next to yours, strike up conversations with nearby travelers, and listen to another’s favorite playlist. Ad hoc mobile social networks are like meeting new people at a bar and getting exposed to their ideas – far better than a solitary ride, and something that may bring us all a bit closer together.

    Book an appointment with us at this year's TU-Automotive Detroit to see the many ways we're helping automakers rethink the in-car UX. 


    Mohammad Horani
    Manager, Advanced Development

    Go comment!
  • One Assistant to Rule Them All

    | Nov 15, 2018
    Not all virtual assistants have been created equal. Some, like Siri and Cortana, are great at understanding human speech and at sending texts and emails. Google Assistant is excellent at pulling up good search results and identifying songs. Alexa can set timers, integrate with home-automation devices, and give news updates. Amy Ingram can schedule meetings like a pro. Fin can do a wide variety of tasks, although it only integrates with Google Calendar. The list goes on and will no doubt continue to expand. After all, the market for digital personal assistants is a lucrative one, expected to reach more than $12 billion by 2021 with 1.8 billion active users.

    Capitalizing on this popular trend by incorporating it into cars seems like a good idea for automakers. However, I see a few challenges.
    • Deciding on a single assistant. Having choice in virtual assistants is great but the drawback to the growing number of them is that no one assistant offers everything in one convenient package. You’ll need to work with quite a number of assistants to offer anything close to a full service.
    • No standards for a common interface. Standards have not yet been defined for virtual assistants. This means that the integration effort may be considerable or, worse yet, may make it impractical to offer access to many assistants within the car.
    • No common testing procedure. Without a standardized testing framework or procedure, a large test and validation burden is passed on to automakers for each new assistant. As the number of digital assistants proliferates, the car will be increasingly blamed for glitches falling through the cracks of a test plan.
    • Geographic localization. Users tend to favor virtual assistants developed by and marketed for their region. For example, Chinese users favor assistants developed by the BAT group (Baidu, Alibaba, and Tencent) while North American users prefer Amazon Alexa or Google Assistant.
    Such challenges are at the heart of our multi-assistant solution called the Virtual Assistant Router. Designed to connect with multiple assistants from different providers, the system leverages each assistant’s specific strengths in one seamless transaction for the end user. Drivers can invoke the router without having to remember specific control words for each assistant. They can ask about the weather and have one assistant respond, inquire about a dinner reservation and have another respond, and so on. It’s like having a staff of personal assistants without the hassle of managing them.
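    The routing idea can be sketched in a few lines. This is an illustrative toy, not the actual Virtual Assistant Router API: the assistant names, the intents, and the keyword-based classifier are all invented for the example, and a production router would use real natural-language understanding instead of keyword matching.

```python
def classify_intent(utterance: str) -> str:
    """Toy keyword-based intent classifier; a real router would use NLU."""
    keywords = {
        "weather": "weather",
        "reservation": "dining",
        "song": "music",
        "meeting": "calendar",
    }
    for word, intent in keywords.items():
        if word in utterance.lower():
            return intent
    return "general"


# Each intent is routed to whichever assistant is strongest at handling it.
# The assistant names are placeholders, not real integrations.
ROUTING_TABLE = {
    "weather": "assistant_a",
    "dining": "assistant_b",
    "music": "assistant_c",
    "calendar": "assistant_d",
    "general": "assistant_a",
}


def route(utterance: str) -> str:
    """Pick the backend assistant for one utterance."""
    return ROUTING_TABLE[classify_intent(utterance)]
```

    Because the user invokes only the router, they never need to know (or say) which assistant ultimately answers – the "one seamless transaction" described above.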

    Our Virtual Assistant Router is also white-labeled. We provide automakers with a set of core features that they can customize using a number of different options. One of those options is to incorporate our personalized speech-synthesis solution – see my previous blog for details – so that users can easily train their assistant to sound like a familiar voice (husband, child, close friend, etc).

    Be sure to make a private appointment to see this and a whole host of other innovative technologies in our booth at TU-Automotive Detroit 2019.


    Sorin M. Panainte
    Senior Speech Engineer, Advanced Development

    Go comment!
  • Using Sound to Connect With Customers and Reinforce Your Brand

    | Nov 01, 2018
    Sound is a powerful trigger of memory and emotion. Try watching a horror movie with the sound off; chances are you won’t feel scared. Then try listening to the same movie with no visuals; you’ll be amazed at how frightening it is.

    Marketers have used our innate affinity for sound to distinguish brands for many years. Because humans are naturally good at memorizing melodic patterns, linking a sound with your brand is a great way to create a bond with customers and enhance brand recall. Some of the most widely known “soundmarks” of the last several decades include McDonald’s five-note “I’m Lovin’ It” jingle, the Intel chime, and the Facebook Messenger notification. These are all perfect examples of sonic branding.

    The American motorcycle manufacturer Harley-Davidson is considered one of the pioneers of sonic branding. In 1994, the company filed a trademark application for the sound of its distinctive V-twin engine. They were in rarefied air – by 1998, only 23 sound trademarks had been issued in the United States, while hundreds of thousands of other trademarks had been granted. While Harley-Davidson eventually gave up its request, since then more and more companies have been able to legally protect a variety of different sounds, including Tarzan’s yell, Darth Vader’s breathing, and Homer Simpson’s grunt. Harley-Davidson was also ahead of the pack in realizing the importance of distinguishing its brand at the point of customer interaction. Often referred to as environmental sound design, this type of sonic branding distinguishes your brand by enhancing the user experience.

    This is where sonic branding can have a real impact in automotive. It’s more than just infotainment interface sounds such as taps, button presses, and swipes. Think of all the ways in which your customer interacts with your car through sound – from the start-up sequence to the shutdown tune and everything in between: seatbelt alerts, engine noise, turn signals, incoming calls, door chimes, and so on. Even though we think of these sounds as merely functional, every one of these user interactions could be setting the tone for what it feels like to be in one of your cars.

    Bentley Motors was another pioneer in this area. When developing their second-generation Continental GT, the company created a unique audio identity by replacing all of the interior sounds with iconic old-world sounds in order to connect customers with the company’s classic British history and heritage. (For example, they replaced the mechanical sound of the turn signal with the ticking sound of a grandfather clock.)

    Just like with visual brand assets, relevancy to the brand promise is critical. At Mitsubishi Electric, one of the things we’re exploring is how to use sonic branding to distinguish between vehicle models. For example, a sports car could give customers an exciting upbeat sonic experience while an eco-friendly model could offer a calmer one. An off-roader could sound playful and a luxury model more sophisticated. For cars that allow drivers to switch between different driving modes, sonic branding could be used to create audible experiences that reflect the difference drivers feel through various settings in the suspension, steering, and so on. And as we move into the future of quieter engines and autonomous vehicles, users will begin to rely even more on sonic branding and deliberate sound design to give them the sense of satisfaction and vehicle distinction they once felt by manually controlling the car.

    If you’re at CES, drop by to see me and we can chat! Whatever you do, don’t overlook the way in which users interact with the audio components of your vehicles. If you’re not using sound strategically, you’re missing a valuable way to interact with customers, build trust, and create preference.


        Sophia Mehdizadeh 
    Audio Engineer, Advanced Development

    Go comment!
  • Can We Talk? A Voice Assistant for Everyone

    | Oct 23, 2018

    Voice assistants may be getting smarter but they’re not getting any more interesting to listen to. In fact, their voices all pretty much sound the same. The question is: Why should an in-car assistant sound like a branded voice that everybody else is using?

    At Mitsubishi Electric, we’re challenging the status quo of branded voice assistants by offering a new level of customization. With our solution, drivers can personalize the voice of their in-car assistant by using their own voice or another familiar voice – perhaps that of their fiancé, their child, or a close friend – to make their assistant sound more familiar and give them a unique identity that no one else has. Drivers can also use one voice for a while and then choose another, eliminating the dullness that comes from hearing the same voice over and over.

    A well-crafted and great-sounding voice is a cornerstone of a great in-vehicle user experience. At the Paris Motor Show, Peugeot announced a voice-activated virtual assistant suitable for autonomous driving in their new “e-Legend” concept car. As its voice, Peugeot adopted a digitized version of the voice of its Director of Styling, Gilles Vidal, and enabled "him" to speak in 17 languages.

    Now you’re probably wondering about the complexity and the amount of time it takes to create a new voice. In the past, the effort was significant – a couple of months of work – and involved many hours of speech recordings by a professional voice actor, in a professional studio, under the supervision of a linguistic expert. But this is no longer the case, thanks to new synthetic voice generation algorithms – a direct result of the machine learning advances of the last few years.

    Our next-gen infotainment solution, FlexConnect.AI, uses a deep neural network (DNN) approach that lets us create new voices from just a few minutes of speech recordings. This enables automakers to differentiate their offerings from the solutions of the major assistant providers.

    As an example of what a customized voice can sound like, I've trained the system on my voice and fed it a simple text sentence to speak. Nearly as good as the original!

    Please contact us to schedule a private demonstration at TU-Automotive Detroit 2019, where you can see this exciting technology – and more!

    Sorin M. Panainte
    Senior Speech Engineer, Advanced Development

    Go comment!
  • Avoiding the Risky Ride: 10 Must Haves for Self-driving Cybersecurity

    | Oct 15, 2018
    A malicious agent doesn't need to dictate an autonomous car's position to create havoc. They only need to influence the autonomous vehicle's piloting system through its many sensor inputs to make decisions that a human driver wouldn't – whether that's steering into lanes full of traffic, driving through crowded intersections, or purposefully diverting traffic into (or away from) selected locations. Autonomous car hackers could flood a controller with meaningless random data from imaginary sensors. They could feed an empty roadway video signal to compromised cameras. They could interfere with V2X signals so that "green lights" are broadcast to all directions in an intersection. Or they could fool traffic into navigating around false road closures. While a car's multiple collision avoidance measures may be able to minimize some attempts to subvert the car's route, at a minimum a successful hack would still cause traffic snarls and fender benders. And highly virulent or impactful attacks could allow a malevolent entity to temporarily shut down a transportation grid and halt the heartbeat of a city.

    What to do? While cybersecurity is a mindset and a never-ending discipline, we've put together our top ten checklist of things that every autonomous vehicle builder must consider when securing their next generation products, especially those modules that affect steering, braking or motion of the vehicle. 

    1) Hardware security. Modern CPUs used in autonomous automotive designs have hardware-accelerated cryptography engines and a facility for hardware-secure key storage. Use them. Configuring your system to take advantage of hardware crypto and key storage shouldn't be a herculean effort. But it is very easy to overlook, since software-implemented default mechanisms will work out of the box. Just remember to double-check that you're using the hardware when you can – the superior performance and security are well worth the effort.

    2) Secure boot. Similarly, your hardware should have secure boot capability and if you're not taking advantage of it you're making it unnecessarily easy for hackers. A secure boot can drive some low-level architectural decisions but it's a definitive must-have to ensure that the system executes only authenticated firmware images. This is something that a capable OS vendor should be able to help you design and implement.

    3) Encrypt data at rest. You know what bits of data in your system are sensitive – keys, customer data, passwords, logins – just make sure you treat them that way. Data "at rest" (that is, stored for later use) must always be encrypted, and decrypted only at the point of use. Don't forget to zero-fill any buffers that contained sensitive data before releasing them back to the system. And please use proven encryption standards like AES, Triple DES, or Blowfish/Twofish to secure that data – XORing your data with a disguise pattern is really asking for trouble.
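    The zero-fill advice can be illustrated in a few lines. This sketch is in Python for brevity (real embedded code would be C, where a plain `memset` can be optimized away and a guaranteed wipe like `explicit_bzero` is preferred); the key value is invented, and the encryption step itself is elided since it needs a crypto library.

```python
def wipe(buf: bytearray) -> None:
    """Overwrite a sensitive buffer in place before releasing it.
    Note: bytes objects are immutable in Python, so secrets that must
    be wiped have to live in a mutable bytearray."""
    for i in range(len(buf)):
        buf[i] = 0


# Hold key material only as long as it is needed, then wipe it.
key = bytearray(b"example-session-key-32-bytes!!!!")  # illustrative key
# ... use `key` with your cipher of choice (e.g. AES-GCM) here ...
wipe(key)  # the plaintext key no longer exists in this buffer
```
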

    4) Remove backdoors. It's easy and helpful to build software with backdoors that really help your development team, hidden features that assist your testing and validation efforts, and secret codes or logs that aid your customer support staff. But all of these risk putting insightful internal data and valuable new tools into the hands of the bad guys. The simple joy of developer bypasses or secret Easter eggs will compromise your product. Don't hide unneeded features, and if you've got multiple levels of features, enable them through a securely authenticated and encrypted mechanism.
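    What a "securely authenticated mechanism" for feature enablement might look like can be sketched as follows: instead of a hard-coded bypass, a diagnostic feature unlocks only with a keyed, expiring token. Everything here is invented for illustration – the key, the token format, and the expiry window – and a production design would provision per-vehicle keys in secure hardware.

```python
import hashlib
import hmac
import time

# Illustrative only; a real key would be provisioned per vehicle in secure storage.
UNLOCK_KEY = b"per-vehicle-secret-provisioned-at-factory"


def make_unlock_token(feature: str, issued_at: int) -> bytes:
    """Issue a keyed token binding a feature name to an issue time."""
    msg = f"{feature}:{issued_at}".encode()
    return hmac.new(UNLOCK_KEY, msg, hashlib.sha256).digest()


def feature_enabled(feature: str, issued_at: int, token: bytes,
                    max_age_s: int = 3600, now=None) -> bool:
    """Enable a gated feature only for a fresh, authentic token."""
    now = int(time.time()) if now is None else now
    if now - issued_at > max_age_s:  # token has expired
        return False
    expected = make_unlock_token(feature, issued_at)
    return hmac.compare_digest(expected, token)  # constant-time compare
```
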

    5) Validate all input. Every developer knows they need to validate program inputs, right? Yes, of course. But unfortunately, SQL injection attacks still exist. Embedded engineers tend to be pretty rigorous about validating user inputs, but there are still many blind spots in what developers consider to be inputs worthy of validation. Assume that anything the program relies on – even private configuration files, remote APIs, and protected database fields – is filled by keyboard-wielding monkeys.
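    The SQL injection case in particular has a well-known fix: parameterized queries, which make the driver treat input strictly as data, never as SQL. A small sketch using Python's standard-library sqlite3 (the table and rows are invented for the example):

```python
import sqlite3


def find_user(conn: sqlite3.Connection, name: str):
    # Parameterized query: the `?` placeholder means input such as
    # "x' OR '1'='1" is matched literally as a name, not executed as SQL.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (name,))
    return cur.fetchall()


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice'), ('bob')")
```

    A classic injection string returns no rows instead of dumping the whole table, because it never escapes the placeholder.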

    6) Don't mishandle credentials. Credential breaches can be one of the most damaging results of a successful hack. On in-vehicle systems, don't store credentials in plaintext without appropriate security in place, don't store credentials temporarily, and consider whether credentials even need to be stored at all. All of these precautions especially apply to any back-end services that your autonomous car may rely on. Assume that the code in your cloud service is talking to malicious and compromised vehicles, and guardedly protect credentials everywhere they are used.
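    When a credential must be stored, the standard pattern is to store a salted, deliberately slow hash rather than the credential itself, and to compare in constant time. A sketch using Python's standard-library PBKDF2 (the iteration count is illustrative; tune it to your hardware):

```python
import hashlib
import hmac
import os


def hash_credential(password: str, salt: bytes = None):
    """Derive a salted, slow hash; store (salt, digest), never the password."""
    salt = os.urandom(16) if salt is None else salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest


def verify_credential(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive and compare in constant time to avoid timing leaks."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, digest)
```

    A database dump then yields only salted hashes, which an attacker must brute-force one expensive guess at a time.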

    7) Securely update software. Over-the-air updates are a valuable tool for automakers to update cars and even more indispensable for autonomous vehicles. They're also a necessary part of cybersecurity vigilance, allowing patches to be deployed, and older vulnerable libraries to be updated. But they're designed to do exactly what hackers dream of doing – replacing existing components of a running system with new ones. Make sure that your OTA software provider is using strict security protocols, that your updates are signed and authenticated, and that all security measures that can be enabled, are.
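    The verify-before-install step can be sketched as below. Note the hedge: production OTA systems use asymmetric signatures (e.g. RSA or Ed25519) so the vehicle only ever holds a public verification key; the symmetric HMAC here is used purely to keep the sketch standard-library-only, and the key is invented.

```python
import hashlib
import hmac

# Illustrative only; real OTA uses an asymmetric key pair, with the
# private signing key kept far away from any vehicle.
SIGNING_KEY = b"shared-update-signing-key"


def sign_update(payload: bytes) -> bytes:
    """Server side: sign the firmware image."""
    return hmac.new(SIGNING_KEY, payload, hashlib.sha256).digest()


def apply_update(payload: bytes, signature: bytes) -> bool:
    """Vehicle side: refuse any image whose signature doesn't verify."""
    if not hmac.compare_digest(sign_update(payload), signature):
        return False  # reject tampered or unsigned image
    # ... flash the verified image here ...
    return True
```
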

    8) Remove debug assist. Discovering and reproducing bugs encountered in the field is tremendously difficult. Unfortunately, the tools needed to uncover those bugs fly in the face of cybersecurity best practice and create a hacker's paradise. Don't leave ssh or ftp on the system, don't leave any programmer utilities on the disk image, and remove any hidden features from code and configuration files. You may need all those things for development builds, but make sure your production builds are airtight and locked down.

    9) Update or remove components. Open source libraries offer lots of functionality, embedded operating systems are chock full of features, and pre-built packages exist to simplify building and configuring software. But these time-saving tools often leave behind remnants – unused components whose purpose nobody remembers, or that everyone's afraid to remove in case they might be needed one day. Those unused modules are just waiting for hackers to misuse them. Cleaning out your closet isn't fun, but it feels great once you're done – and similarly, you need to audit all the software installed on your system. Be ruthless in removing things that aren't needed. And if a component is needed, aggressively update it to the latest, most secure version that has fixed all known vulnerabilities. Don't let the convenience of leaving alone "things that aren't broken" be the reason hackers break into your cars.
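    The audit itself can be mechanical: compare what is installed against an approved-components policy and flag everything else for removal or update. The component names and version tuples below are invented for illustration; a real audit would pull them from your package manager or software bill of materials.

```python
def audit_components(installed: dict, policy: dict):
    """Compare installed component versions against a policy mapping
    approved component names to minimum acceptable versions.
    Versions are simple comparable tuples, e.g. (1, 1, 1).
    Returns (to_remove, to_update), both sorted by name."""
    to_remove = [name for name in installed if name not in policy]
    to_update = [name for name, ver in installed.items()
                 if name in policy and ver < policy[name]]
    return sorted(to_remove), sorted(to_update)
```

    Anything in `to_remove` comes off the image entirely; anything in `to_update` gets bumped to a patched version before the build ships.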

    10) Remove unused access. Similar to the above point about removing unused components, there are often unused ports, protocols, and accounts on a system that are leftovers from development or have been made obsolete by newer work. Get rid of everything that a properly running system doesn't need. Don't forget hardware in this purge. Unless your device actually uses Ethernet and USB during run-time, don't leave those connectors populated on the board. Disable any USB-to-JTAG interfaces at a hardware level and remove JTAG pins or any other specialized debug connectors. If an interface isn't needed, it shouldn't be found on any shipping hardware.

    While that wraps our top 10 checklist for basic security requirements for autonomous cars, there's a lot more to automotive cybersecurity than a single blog can cover. I'll be talking about these items and more during my talk "A Risky Ride? Cybersecurity for AVs" at ADAS & Autonomous Vehicles 2018, Tuesday October 16. Please drop by if you'd like to chat about this blog or anything else cybersecurity related. Think we missed something in our checklist? Let us know in the comments.


    Kristie Pfosi
    Senior Manager, Automotive Cyber Security
  • The Perfect Roadtrip DJ

    | Oct 10, 2018
    A great DJ is the soul of a nightclub, reading the crowd’s pulse and spinning up the right tracks to get people movin’ and groovin’.

    Ever wish cars could do that – read the likes and dislikes of everyone in the car and play the exact music that everyone's in the mood to hear?

    Mitsubishi Electric community DJ

    According to Ford research, the car is the place where the largest percentage of people listen to music – including roughly seven out of 10 people between the ages of 13 and 24, and nearly eight out of 10 drivers over the age of 45. What’s more, nearly half of surveyed participants indicate that they would pay more money for higher-quality audio. Given that a new generation of drivers has grown up in an era of on-demand music streaming, an in-car DJ could help digital natives feel connected to their cars and better enjoy the driving experience.

    It’s not mind reading – Nissan is actually working on that – but the ability to cue up music that all of the car occupants can agree on. Sound intriguing? Here’s what we've been looking into.

    First, we identify who’s in the car via brought-in devices and our Driver Monitoring System. This includes basic information like age and gender, and even emotional state. Then, we correlate this data with information that the car determines about each person's music tastes, such as:
    • Frequency of songs appearing in playlists
    • Metadata of songs that have been played
    • Audio streaming sources and subscription channels (Spotify, Pandora, Apple Music, etc.)
    • Songs that are skipped within the first few seconds or played all the way through
    Mitsubishi Electric community DJ diagram

    With this information, we can train a recommender system with genres, artists, music sources, and preferences for novelty versus consistency. Ultimately, we build a Venn diagram of musical interests for everyone in the car and play songs from the intersection that we predict all passengers will enjoy.

    How might this work? Let's say I love country music and my friend loves classic rock. Our Roadtrip DJ could create a playlist heavy on Neil Young, the Eagles, Shania Twain, the Band, and CCR. Too easy? How about a son who loves heavy metal and a dad who loves classical music – our Roadtrip DJ could play Yngwie Malmsteen with the Japanese Philharmonic Orchestra, Metallica’s S&M, or Rammstein’s XXI Klavier.
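    A toy version of that intersection logic, with hypothetical play and skip counts standing in for the signals listed above (the function and field names are illustrative, not from our actual system):

```python
from collections import Counter

def preference_scores(plays, skips):
    # Fully played songs count for a genre; skips count against it.
    scores = Counter(plays)
    scores.subtract(skips)
    return {genre: n for genre, n in scores.items() if n > 0}

def shared_playlist_genres(passengers):
    # The overlap of the Venn diagram: genres every passenger scores positively.
    liked = [set(preference_scores(p["plays"], p["skips"])) for p in passengers]
    return set.intersection(*liked)

passengers = [
    {"plays": {"country": 12, "classic rock": 5}, "skips": {"metal": 3}},
    {"plays": {"classic rock": 9, "metal": 7}, "skips": {}},
]
print(shared_playlist_genres(passengers))  # {'classic rock'}
```

    A real recommender would score individual tracks and artists rather than coarse genres, but the principle is the same.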

    We've also been thinking about how to make the solo driving experience more enjoyable through music. Much like the scenario above, we can acquire contextual information about the driver and the drive, such as:
    • Trip departure points and destinations
    • The driving situation – in high traffic, on a long drive or an empty road late at night
    • The driver’s emotional state
    • Special dates – holidays, birthdays, celebrations, vacations, anniversaries
    • Busy times – when there are lots of calendar meetings or incoming calls
    Synthesizing all of this information, we can play music that's highly personalized for different needs, like a relaxing selection before a big meeting, an energized playlist for a long drive, or a fun mix before a party.
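    A first approximation of this context-to-playlist mapping could be a simple rule table. The thresholds and mood names below are illustrative assumptions, not product behavior:

```python
def pick_mood(context):
    # Safety- and stress-related cues take priority over everything else.
    if context.get("traffic") == "heavy" or context.get("emotion") == "stressed":
        return "relaxing"
    if context.get("trip_length_km", 0) > 200 or context.get("late_night"):
        return "energizing"
    if context.get("upcoming_event") in {"party", "birthday"}:
        return "fun mix"
    return "everyday favorites"

print(pick_mood({"traffic": "heavy"}))         # gridlock before a big meeting
print(pick_mood({"trip_length_km": 350}))      # long highway drive
print(pick_mood({"upcoming_event": "party"}))  # on the way to a celebration
```

    In practice these rules would be learned per-driver rather than hand-written, but the rule ordering shows how competing contextual signals get resolved.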

    What kind of DJ would we imagine for our autonomous future? We've got some great ideas for that too – but you'll have to wait for my next blog for the big reveal.


    Jacek Spiewla
    User-Experience Manager, Advanced Development
  • Automotive's Strange Changes

    | Sep 20, 2018

    Software is bringing strange changes to the field of automotive.

    Not because cars are increasingly defined by and differentiated through their software – we've been adjusting to that for over a decade – or because software has development cycles measured in months rather than years. What I'm talking about is software's driving need for cybersecurity, a force that is bringing unity to the automotive ecosystem.

    Everyone – the consumer, automaker, and supplier – is negatively impacted by vehicles being hacked. It's not just that one model that's affected; the trust in an entire ecosystem can be shaken. Good cybersecurity cannot afford to be a market differentiator; it's a required baseline.

    As numerous other industries have shown, cybersecurity measures are most effective when they're cooperative. Vulnerabilities need to be communicated to others in the ecosystem as soon as they're discovered. Techniques and tools should be shared so everyone can use best development practices, proper data handling, and the latest patched libraries. And while cooperation isn't something that comes naturally to automotive, we've been making amazing strides.


    Auto-ISAC September 25 2018

    That's why I'm proud to have been elected by the Auto-ISAC board of directors as the Auto-ISAC 2018 Summit Chair. In that position, I've been responsible for planning and organizing this year’s event. I'll be providing a brief welcoming address to kick off the conference, inviting OEMs, tier ones, cybersecurity providers, and government agencies to the stage. There promises to be a plethora of collaborative discussions on legislative and regulatory policy, incident response and vulnerability management, and building a future cybersecurity workforce. (If you haven't signed up yet, there's still time!)

    If it seems like one big family, that's because it feels like it. Within the cybersecurity space, I can sit side-by-side with my colleagues from competitors like Bosch, Lear, and Continental, and customers like GM, Ford, and FCA – all working together on serious issues and how to mitigate them.

    Everyone's passionate about the same goal: making cars safe. And that spirit of cooperation is a change for the better.

    Kristie Pfosi
    Senior Manager, Automotive Cyber Security
  • Enabling the in-car cocktail party

    | May 09, 2018

    It's not what you think - we're not talking about drinking and driving! Creating an automotive speech recognition system that works well under all conditions has always been a very challenging proposition. The car’s acoustic environment has lots of loud, non-predictable sounds that compete with the driver’s voice – like wind noise, road noise, and traffic. You may be able to guess another big noise source from inside the car, especially if you’re a parent – the other occupants. Whether it’s children, companions, or colleagues, it’s not always easy to stop all chatter just so the car has a chance of recognizing what you say to it. Recognizing one speaker in a crowd is the so-called “cocktail party problem,” and it’s been a very difficult one for computers to solve, especially within the car.

    Friends talking in car

    This is why we’ve developed technology that can distinguish and separate multiple simultaneous voices, making the vehicle’s speech recognition more robust, useful, and accurate. Allowing the car to recognize hands-free commands without requiring other conversations to stop provides more natural speech interaction and a better user experience for everyone in the car. And since this solution only needs a single microphone, it doesn’t introduce additional hardware expense to the car. Great – but according to the US Census Bureau’s latest data, over 76 percent of commuters drive alone and therefore don’t need to worry about competing voices.

    Yes, but that’s today. In a few short years, perhaps even within one or two traditional automotive design cycles, mobility will be drastically transformed. Mobility as a Service (MaaS) is taking off and promises the ability for people to dynamically choose their mode of transport, with on-demand vehicle subscriptions and multiple car- or ride-sharing models.  Far from removing the need to talk to your car, we believe self-driving technology will result in even more opportunity to talk to, direct, and control your car. The future is multi-passenger – and conversational.

    Digital assistants like Amazon Alexa and Google Assistant are the hottest thing since sliced bread. With greater exposure, reliability, and comfort in a digital assistant, people will be increasingly relying on them to perform tasks in the car – tasks that don't rely on their eyes or hands.  That’s why we don’t just need digital assistants in our cars, we need ones that can listen to each one of us, even when we’re all talking.


    Jacek Spiewla
    User-Experience Manager, Advanced Development
  • Speaking to Smarter Cars

    | May 03, 2018

    If you or I were eavesdropping on someone’s conversation we’d probably be able to tell if the speaker was a man or a woman, if they were young or old, maybe even where they lived or their first language. Computers, on the other hand, have no such ability; in their attempts to understand human speech they often overlook contextual details and user characteristics that people naturally absorb.


    Speech recognition is one of the most challenging areas of computer science. While improvements have been slow to materialize, the technology is finally beginning to become a useful tool – as anyone who uses a virtual assistant can tell you. That said, the next major challenge in computerized speech recognition is understanding the contextual details of speech and the personal characteristics of those speaking.


    What would this advancement mean for cars? We believe it would be a solid building block for improving the in-car user experience. Let’s look at a couple of cases where a car can help its users once it can “listen between the lines.”


    • Juan asks his car to “Enciende la radio” (“turn on the radio”). Since the car doesn’t understand that command, it checks it against a few different languages in its library and determines that the request is in Spanish. It then turns on the radio as requested and automatically switches the infotainment system, speech recognition, and instrument cluster settings to Spanish.


    • Olivia uses voice recognition to authorize herself as a new driver of her parent’s sedan. The car recognizes her and auto-sets the “safe-driving” profile her parents have configured for her. Because the car recognizes her voice as that of a young woman, it fine-tunes the speech recognition models using a higher pitch for better accuracy while switching the default satellite radio presets from classic rock to electronic dance music.


    • Josephine flies from Montreal to Atlanta and starts up her rental car. The car detects her French accent and prompts her to see if she would like to switch to French. As Josephine is very comfortable in English, she replies “No”. The car then asks if she would prefer metric measurements and she readily answers “Yes”, so the car switches its units to metric.


    • As a Brooklyn native, Tony has never before owned a car. His brand-new car detects his New York accent and asks if he would like to enable the “urban native” features – prompted street parking warnings, parking lot pricing display, and automatic traffic avoidance suggestions. Tony knows the city but not how to navigate it by car, so he enables them right away. 

    These are just a few examples of how the determination of voice characteristics can help improve the user experience in a car. We at Mitsubishi Electric are looking into this along with how voice context can be merged with information from social networks to let the car guide preferences in music, shopping, restaurants, or even the in-car environment. With some appropriate smarts, the car can be an amazing accompaniment to a predictably perfect experience.
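    As a sketch, scenarios like these boil down to mapping a detected language or accent tag onto settings defaults that the user can confirm or override. The tags and profile fields here are hypothetical:

```python
# Hypothetical defaults keyed by detected speech tag; a real system would
# confirm changes with the user, as in the Josephine example above.
PROFILES = {
    "es":    {"ui_language": "Spanish", "units": "metric"},
    "fr-CA": {"ui_language": "English", "units": "metric"},
    "en-US": {"ui_language": "English", "units": "imperial"},
}

def apply_speaker_profile(detected_tag, current_settings):
    profile = PROFILES.get(detected_tag)
    if profile is None:
        return current_settings            # unknown speaker: change nothing
    return {**current_settings, **profile}

settings = {"ui_language": "English", "units": "imperial"}
print(apply_speaker_profile("es", settings))
# {'ui_language': 'Spanish', 'units': 'metric'}
```

    The interesting engineering lives in producing the tag reliably; once it exists, the personalization layer is straightforward.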

    Jacek Spiewla
    User-Experience Manager, Advanced Development
  • Let It Snow

    | Feb 16, 2018
    Snowy winter driving

    If you live in the Snow Belt, you might be tempted to take autonomous driving technology with a dose of (road) salt. You’ve driven in winter and you know what it’s about. Sometimes the road is covered with black ice, snow flies off another car’s roof, or chunks of ice litter the road – sometimes it’s hard to see the road at all, making you wonder how today’s ADAS features function in inclement weather. You may even have fond memories as a young driver practicing your winter driving skills in an empty snow-covered parking lot with Mom or Dad. If you do, you know that self-driving cars are going to need their own snowy driving drills before they are up for the task.


    That’s exactly what we’re doing with our self-driving technology at Mitsubishi Electric. Our high-precision mapping technology allows us to always know where the road is – even if it’s covered in a white blanket of snow. This is crucial for self-driving cars in winter and has been a major stumbling block for the industry to date. We’ve already tested it with our own “parking lot” test and road tests are currently in progress.


    If you’ve been watching the PyeongChang Winter Olympics like I have, it’s hard not to be astounded at the world’s best athletes shredding the slopes. For the more adventurous among you, those amazing skiing and boarding skills might have awakened the urge to hit the hill a bit yourself. Consider this: By the time the 2022 Winter Olympics start in Beijing, your car will be able to take you to your local ski resort and back – all by itself.

    Gareth Williams
    Executive Director of Advanced Development
  • Portrait Displays and the 2019 RAM 1500

    | Jan 17, 2018
    CNET 2019 RAM 1500 Infotainment

    If you had a chance to see some of the more exciting announcements coming out of the Detroit Auto Show (aka NAIAS or the North American International Auto Show), you may have seen the new 2019 RAM 1500. If you did, I’m sure you didn’t miss the beautiful new 12” portrait-mode uConnect infotainment system. Here’s a glowing review from CNET RoadShow.

    2014 Cadillac CTS demo with FlexConnect


    In that video, the reviewer Antuan Goodwin compares the RAM 1500 display to one in the Tesla Model S. That particular comment got us thinking – at Mitsubishi Electric, we’ve been doing huge portrait displays for a long time now. We were building portrait mode infotainment systems with our FlexConnect platform before the Model S made luxurious portrait displays the hot new thing. Here’s a 2014 Cadillac CTS outfitted with a portrait FlexConnect (conveniently parked in a cafeteria for our press event that year).




    FCA Police Charger w/ FlexConnect

    We think portrait mode displays are the future – and Praveen Chandrasekar at Frost & Sullivan agrees in his piece about the Volvo XC90 display. As another example, we supply the mobile command center for a different FCA product we can talk about: the Law Enforcement Dodge Charger. 


    We love the 2019 RAM 1500 infotainment system, and we’re really happy it’s getting rave reviews. We’ve always loved trucks, and we’re glad they’re getting some long overdue attention as one of the coolest cockpits at NAIAS.


    Gareth Williams
    Executive Director of Advanced Development
  • CES 2018 - Day three highlights: The battle of the digital assistant

    | Jan 12, 2018

    CES 2018 banner

    Moving out of the automotive-centric CES north hall into the south and central halls, one thing becomes clear. This is the year for digital assistants – primarily Amazon Alexa and Google Assistant – and they’re fighting for dominance in the technology landscape. Voice technology has finally reached the point where it’s usable by the general public without the hassle and pain of earlier solutions. For the car, that’s great news as voice assistants are more than just a cool feature – they can actually improve safety while increasing functionality and productivity.

    Hey Google Jeep dialog @ CES 2018

    Amazon is ahead in the number of announced integrations and they seem to be working hard to be an easy integration partner. Alexa-based rollouts in the works now include Toyota and Lexus, joining previously announced partners Ford, Volkswagen, Hyundai, and Volvo. Alexa has also made a big bid in the home, with ecosystem partners on the computer side like Acer, Asus, and HP, as well as appliance manufacturers like Whirlpool and Kohler. As we see more home/car integrations, this ecosystem dominance will strongly tie the two together. We can immediately think of several use cases.


    This isn’t to say that Google is standing still. Partners include Honda, Hyundai, GM, and Kia. In the home, they’re going into LG ThinQ, JBL speakers, and Sony TV. Google is clearly making a push to try to unseat Amazon’s market lead.

    Some products incorporate both Google and Amazon digital assistants. Will they both be active simultaneously, dependent on keyword, or will they be enabled exclusively? Time will tell but we may end up seeing a couple of different approaches to handling multiple digital assistants.

    Of course, CES’s new auto-related tech isn’t all voice assistants. LIDAR companies claiming the fastest, cheapest, smallest, or otherwise best technology – from Velodyne, Quanergy, AEye, Luminar, LeddarTech, and Innoviz – are all jockeying for a position in a market that’s about to explode.

    Yamaha drone

    And while there are a huge number of drone companies with use cases ranging from photography and security patrols to emergency supply delivery and medical assistance, curiously none of them are showing automotive use cases like we’re demoing in our booth.

    It’s been a packed three days – it’s almost time to look forward to next year!


    Gareth Williams
    Executive Director of Advanced Development
  • CES 2018 – Day two highlights: Bringing it home

    | Jan 11, 2018
    Ford 'living street' @ CES2018

    We’ve been saying it for some time and it’s our theme for the show: Automakers no longer want to be defined by the traditional (and narrow) definition of automotive but want to be seen as mobility providers. Moreover, they want to influence not only how we commute but also our lifestyle. Nowhere was this more evident on the CES show floor than with Ford. This Motor City giant has an impressive vision for the connected city. Their booth is in fact a “living street”, complete with a self-driving delivery car that can cut down on traffic and parking hassles by allowing grocers, dry cleaners, and other businesses to share delivery vehicles. This makes more room for green living spaces – and people. Definitely worth a visit.


    Jeep made a solid lifestyle play on their booth with a Home-to-Car demo that features Amazon Alexa and Hey Google. Owners of the 2018 Jeep Cherokee Latitude with the optional “Tech Connect” package can now ask their favorite digital assistant to remotely start and stop the engine, lock and unlock the doors, monitor vehicle vitals, and more.


    Honda autonomous ATV @ CES2018

    Honda rules the roost with robotics but another standout is their autonomous ATV, built on their existing rugged ATV chassis, with heavy-duty rails on top to accept multiple different accessories. It’s not only interesting for things like fire and rescue and construction – Honda also imagines this for personal use. Want extra help around the ranch – feeding cattle or cutting weeds? What about plowing the snow out of your driveway, hauling rocks, or raking leaves? Program your rugged D18 to take care of it.


    Mercedes MBUX @ CES2018

    Mercedes had some very cool steering-wheel haptic HMIs that showed off their new MBUX user interface with natural language integration – CNET has a great detailed review. It’s almost hard to look at the HMI as it’s right next to their gorgeous AMG electric supercar – 0-200 km/h in under 6 seconds – but we managed and so should you.


    Although things continue to be extremely busy on the Mitsubishi Electric booth, we hope to get further afield tomorrow to check out the drone and robotics technology. Once again, stay tuned!


    Gareth Williams
    Executive Director of Advanced Development
  • CES 2018 – Day one highlights: Industry synergy

    | Jan 10, 2018
    We spent most of our time in the North Hall on day one where the majority of automotive tech is focused – sandwiched in-between other meetings, of course. What did we see?


    Toyota 'beloved intelligence' @ CES2018

    As expected, AI and autonomous driving were focal points for many. Toyota showed their “beloved intelligence” concept car – a vehicle that learns what its owner wants and tries to deliver more of the same, much like a favorite pooch. Kia also had something similar in their Niro EV – watching drivers’ faces to predict their music and playlists. Both of these concepts are things we’ve discussed as predictive HMIs (and of which we’re demoing our own flavor). Something we hadn’t seen before: Nissan and their “mind reading” technology. Like a Muse headband but for the car; the idea is to make the car a partner in the driving activity rather than a replacement, predicting early on what drivers might do and helping the car take action or warn of trouble situations. It’s definitely an intriguing use of wearable technology but it’ll be interesting to see if this has any legs when it comes to consumer usability.


    Byton @ CES2018

    EVs continue to gain traction; every automaker we saw had more EV models in the pipeline – Kia announced 16 new EV models by 2025 – and some like Hyundai are even looking at hydrogen fuel cells as an alternative option. Chargepoint had a great booth showcasing their latest charging and power stations. And taking the place of last year’s buzz around Faraday Future was the buzz over Byton, another Chinese-funded, cleanly designed, and incredibly sexy EV. It’s the next competitor in a long line (like Karma, NextEV, Lucid, etc.) attempting to unseat Tesla. Will they be successful? Check back next year to see.


    Notable too was the surge of 5G-related technology and announcements – Qualcomm, Baidu, Verizon, and Nokia are all throwing in their hats. Think fully mobile networks with WiFi data speeds and you’ll sense the excitement. Qualcomm also discussed C-V2X, or cellular V2X – using car-to-car LTE for fast, low-latency communication in a way that could finally make V2X a reasonable proposition. We’ll have to wait for the network operators to make it ubiquitous before the automakers jump on board but they’ve already started dipping in their toes … it won’t be long.


    Hyundai home @ CES2018

    Finally, we saw automakers stretching their market muscle into non-automotive applications of their technology, like Hyundai with their hydrogen-powered home or Toyota with their personal mobility “scooter”. We’ll expect to be seeing a lot more of this trend as soon as we get the chance to cover more of the floor! Stay tuned.


    Gareth Williams
    Executive Director of Advanced Development
  • Autonomous reality check: The pressing need for highly accurate maps

    | Jan 09, 2018
    Five years ago, the most sophisticated in-car navigation system displayed roads using GPS to locate a vehicle within five yards, providing fairly reliable turn-by-turn driving directions. However, the system could be wrong by 50 yards in densely populated areas with urban canyons, and fail completely in tunnels. Too many of us have horror stories of ending up in a sand pit instead of a campground or someone’s driveway instead of a parking lot. The proliferation of Google Maps on smartphones somewhat improved the situation in that most maps were updated far more frequently than their in-car cousins but as a GPS-based system, its accuracy in all situations was still unreliable.


    For the coming generation of autonomous vehicles, the convenience of low-precision maps is no longer enough. Centimeter-level accuracy becomes critical.


    A few years ago, some automakers had hoped that autonomous vehicles might be able to position themselves using low-definition maps and high-powered sensors. With clear road markings, visual sensors could keep cars safely within lanes and spot the dotted lines indicative of exits.


    The problem is that a fully driverless car needs to operate safely in all environments and under all conditions. LIDAR has an effective range of around 50 yards, but that can dwindle significantly in a snowstorm or when other vehicles obscure objects. Even the smartest car travelling on a freeway can only “see” about a second and a half ahead of itself. Self-driving cars need to be able to anticipate turns and junctions far beyond their sensors’ horizons. Perhaps more importantly, they also need to locate themselves precisely, as an error of a couple of yards could place a driverless car in oncoming traffic.


    That’s why we’re working to help advance autonomous technology on two fronts: building highly accurate maps and building highly accurate position determination into cars.


    Mitsubishi Electric MMS scanning system

    To create high-precision 3D maps, we’ve developed the Mobile Mapping System (MMS), a self-contained, sensor-studded platform with multiple cameras, LIDAR, GNSS receivers, and processors mounted on a vehicle. The system scans a road segment to create a comprehensive and highly accurate digital representation of that road – one that is used both for training autonomous systems and for creating extremely accurate maps.

    For high-accuracy position determination, we provide a number of technologies that are integrated into the car’s sensor network. We combine inputs from several satellite positioning systems to improve the traditional accuracy that comes from using only one. (Our experience in satellites includes the latest satellite positioning network sponsored by the Japanese government.) We also have technology that improves the accuracy of standard dead reckoning systems – augmenting wheel tick sensors and rough directionality with camera images that track road position and speed – resulting in far better position estimations in the absence of satellite signals.
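    The blending of satellite fixes with dead reckoning can be sketched as a weighted fusion, where the GNSS weight collapses toward zero in tunnels or urban canyons. This simplified blend is an illustration only – real systems use Kalman-style estimators rather than a fixed weight:

```python
def fuse_position(dr_estimate, gnss_fix, gnss_weight):
    # Weighted blend of a dead-reckoning estimate and a GNSS fix.
    # gnss_weight ~1.0 under open sky, ~0.0 with no usable satellite signal.
    return tuple(
        (1 - gnss_weight) * dr + gnss_weight * g
        for dr, g in zip(dr_estimate, gnss_fix)
    )

# Open sky: the satellite fix dominates.
print(fuse_position((100.0, 50.0), (102.0, 50.4), gnss_weight=0.9))
# Tunnel: no fix, so wheel ticks and camera odometry carry the solution.
print(fuse_position((100.0, 50.0), (0.0, 0.0), gnss_weight=0.0))
```

    The camera-augmented dead reckoning described above is what keeps the first argument accurate enough for this blend to stay within lane-level error when satellites disappear.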

    We’re working closely with industry mapping experts to further augment our technology. Visit our website for more information and then drop by our booth at CES to learn more.



    Mark Rakoski
    Vice President of Sales, Engineering, and R&D Global Business Development
  • Harmonizing the home and the hatchback

    | Jan 05, 2018

    The smart home market is growing almost as rapidly as the connected car market but so far the two have yet to fully converge and what their consolidated future will look like is still up for grabs. When most people envision the connection between car and home, they often think of integrated media, navigation, or energy management. We at Mitsubishi Electric think of much more. In fact, we see cars working in harmony with smart homes in a way that offers exciting new conveniences and cost savings in a seamless and intuitive manner.

    Car-to-home integration
    Predictive integration
    It’s easy enough to imagine a driver manually using a vehicle interface to turn up their home’s heating on the way home from work. Or using a smart home assistant to check on the car’s charge status or fuel level. But imagine the benefits that could result from the car and home working together on their occupants’ behalf without continuous human intervention, using a blend of home-to-car connectivity and predictive HMI technology.

    For example, the car could automatically notify the home at the end of the work day as it gets progressively closer so that the home could bring its temperature up or down according to owner preferences and time of year. Once the car signals to the home that it’s pulling into the driveway, the home could turn on the lights for a welcoming arrival. Conversely, as the last occupant leaves for the day, the car could alert the home that everyone was gone, asking it to adjust the temperature, turn off the lights, lock the door, and arm the security system.
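    The arrival logic described above can be sketched as a distance-triggered rule. The home coordinates, thresholds, and action names below are all illustrative assumptions, not product behavior:

```python
import math

HOME = (42.33, -83.05)   # hypothetical home location (lat, lon)

def distance_km(a, b):
    # Equirectangular approximation -- adequate at city scale.
    dlat = math.radians(b[0] - a[0])
    dlon = math.radians(b[1] - a[1]) * math.cos(math.radians(a[0]))
    return 6371 * math.hypot(dlat, dlon)

def home_actions(car_position, leaving=False):
    if leaving:
        # Last occupant has left: secure the house.
        return ["set_away_temperature", "lights_off", "lock_doors", "arm_security"]
    d = distance_km(car_position, HOME)
    if d < 0.05:
        return ["lights_on"]                  # pulling into the driveway
    if d < 10:
        return ["precondition_temperature"]   # getting close after work
    return []

print(home_actions((42.25, -83.0)))     # approaching the city
print(home_actions((42.3301, -83.05)))  # at the driveway
```

    A predictive HMI would learn these thresholds from routine rather than hard-coding them, but the trigger structure is the same.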

    Voice-activated integration
    Digital assistants with conversational speech interfaces can provide another key technology to bridge the car and home. Market researcher Ovum forecasts that the digital-assistant market will grow from 3.6 billion assistants in use in 2016 to 7.5 billion by 2021. This means there will be almost as many digital assistants in 2021 as there are humans on the planet today, creating a tremendous opportunity for the automotive industry.

    Virtual assistants like Amazon Alexa, Apple Siri, or Microsoft Cortana could provide a sense of delight that keeps car ownership sexy. Imagine your customer’s surprise the very first time their assistant asks, “I see you have an appointment across town in an hour; would you like me to warm up the car?”

    New mobility integration
    Home-to-car integration may be just as useful in new mobility scenarios; add in digital assistants and you’ll have some killer applications. For example, vehicle owners in car sharing pools could use a virtual assistant to find out what’s happening with their car – where it is, who’s driving it, who’s in the passenger seat, and how fast it’s going. And, if they need to take their car back for the day, they could find out when it’s expected to complete its current trip and arrive home.

    Mitsubishi Electric has one of the best-integrated infotainment offerings, FLEXConnect.AI, which makes it perfectly positioned for developing car-to-home and home-to-car use cases. We’re also experts at virtual assistant integration. We’ll be showing some innovative ways to take advantage of both trends in our CES 2018 demo car. Visit our CES web page for more information and then make an appointment to see us next week in Vegas.

    Jacek Spiewla
    User-Experience Manager, Advanced Development
  • Biometrics: The key to your car

    | Jan 03, 2018

    Advanced technology, which makes cars harder to steal, also makes car keys harder to copy. So while losing a car key has never been fun, these days it’s an expensive proposition that most consumers would rather avoid. One solution is to let wearables act as car keys. With at least a quarter of American adults owning a wearable and estimates on wearable adoption continuing to rise, this may be the ideal way for cars to recognize and authenticate their owners. At Mitsubishi Electric, we’ve been investigating a biometric wearable strategy for vehicle authentication and have come up with the following ideas we’d like to share.


    Lifestyle branding

    Car-aware wearables could provide automakers with the perfect way to extend a premium brand onto an owner’s wrist. A Cadillac, Lexus, Mercedes, or Porsche wrist strap would allow consumers to enter their cars while acting as an extension of those same brand qualities into other mobility spaces. Rather than a single-purpose widget, an OEM-branded biometric wearable could interface with a smart home, favorite apps, personal devices, or enterprise equipment, giving OEMs a much broader mobility presence. This capability could also take advantage of consumers’ existing wearables with branded applications on Fitbit, Apple Watch, or Android Wear.


    Car-sharing convenience

    Biometric wearables could make a great accessory for car sharing. By authenticating the owner through biometric identification, a car sharing service could guarantee proper access to whatever vehicles an individual is allowed to use, without needing to worry about pass codes or physical keys. That freedom could extend to car subscription models where someone swaps cars on a dealer lot at will, with a person’s biometric data providing the link to an online account, payment, and insurance details.


    In addition to replacing the physical key, car rentals, car sharing, and ride sharing would all benefit from personalization built on the car’s ability to confidently and uniquely recognize an individual. Biometric identification would allow a car to automatically retrieve and apply many convenience features, like smartphone pairing, calendar syncing, and routing to preset locations, from either personal mobile or cloud repositories. People’s preferred traveling identities would follow them wherever they go, without worrying about smartphone loss or theft.
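
    One way to picture this kind of keyless access is a challenge-response check between the car and the wearable. The sketch below is purely illustrative, assuming a hypothetical setup where the sharing service provisions a secret key to each enrolled wearable and an authorization list to each vehicle; the names, keys, and HMAC scheme are our assumptions, not an actual Mitsubishi Electric design.

```python
import hashlib
import hmac
import os

# Hypothetical enrollment data provisioned by the car sharing service.
SECRET_KEYS = {"alice": b"alice-enrolled-key", "bob": b"bob-enrolled-key"}
ALLOWED_USERS = {"alice"}  # users authorized for this particular vehicle


def sign_challenge(user_id: str, challenge: bytes) -> bytes:
    """Wearable side: sign the car's random challenge with the enrolled key."""
    return hmac.new(SECRET_KEYS[user_id], challenge, hashlib.sha256).digest()


def verify_access(user_id: str, challenge: bytes, response: bytes) -> bool:
    """Car side: unlock only if the user is authorized and the signature checks out."""
    if user_id not in ALLOWED_USERS or user_id not in SECRET_KEYS:
        return False
    expected = hmac.new(SECRET_KEYS[user_id], challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)


challenge = os.urandom(16)
print(verify_access("alice", challenge, sign_challenge("alice", challenge)))  # True
print(verify_access("bob", challenge, sign_challenge("bob", challenge)))      # False: not on this car's list
```

    Because the challenge is random each time, a recorded response can’t simply be replayed, and revoking a user is just a matter of updating the vehicle’s authorization list online.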


    Remote control

    With a biometric wearable or smartwatch, people could remotely control their car from anywhere, with more identity assurance than a physical key fob provides. Your customer could roll down the windows and unlock the car for the kids while still inside packing lunches. Or they could open the trunk to let a neighbor borrow a scissor jack while they’re on vacation.


    Increased security

    Accessing cars via biometric identifiers means that consumers can’t lose their “keys”. It also means that no one can steal someone else’s key, and thus their car, without the appropriate and unique fingerprint, iris, voice, or ECG. Biometric security also makes it far simpler for fleet or business owners to provision vehicles, since transferring cars could take place with a few mouse clicks instead of a cumbersome exchange of key fobs.


    Improved UX

    Biometrics could identify everyone in the vehicle, not just the owner (or whoever has the key fob). This fine-grained information about a car’s occupants could provide a much-improved experience, from individual mobile payments to per-seat personalization. With individual identification, the infotainment system could automatically adapt to the driver and front-seat passenger, and rear-seat entertainment could show everyone’s favorite shows.
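
    As a rough sketch of per-seat personalization, once biometrics identify each occupant the infotainment system only needs to map seats to stored preference profiles. The profile store, occupant names, and preference fields below are invented for illustration; a real system would pull these from an owner’s cloud account.

```python
# Hypothetical per-user preference profiles, keyed by biometric identity.
PROFILES = {
    "dana": {"seat_position": 7, "station": "jazz", "temp_c": 21},
    "kim":  {"seat_position": 3, "station": "news", "temp_c": 23},
}
DEFAULTS = {"seat_position": 5, "station": "top40", "temp_c": 22}


def personalize(occupants_by_seat: dict) -> dict:
    """Map each seat to the identified occupant's preferences, or to defaults
    for guests the car doesn't recognize."""
    return {seat: PROFILES.get(user, DEFAULTS)
            for seat, user in occupants_by_seat.items()}


settings = personalize({"driver": "dana", "rear_left": "kim", "rear_right": "guest"})
```

    Here the driver’s seat, climate zone, and station follow the recognized person, while an unrecognized rear passenger simply gets sensible defaults.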


    These are just some of the reasons we think biometrics will gain traction within automotive. We’ll be showcasing a subset of these use cases at our booth at CES 2018 – be sure to make an appointment to see them and our other innovations.

    Gareth Williams
    Executive Director of Advanced Development
