• One Assistant to Rule Them All

    | Nov 15, 2018
    Not all virtual assistants have been created equal. Some, like Siri and Cortana, are great at understanding human speech as well as sending texts and emails. Google Assistant is excellent at pulling up good search results and identifying songs. Alexa can set timers, integrate with home-automation devices, and give news updates. Amy Ingram can schedule meetings like a pro. Fin can do a wide variety of tasks, although it only integrates with Google Calendar. The list goes on and will no doubt continue to expand. After all, the market for digital personal assistants is a lucrative one, expected to reach more than $12 billion by 2021 with 1.8 billion active users.

    Capitalizing on this popular trend by incorporating it into cars seems like a good idea for automakers. However, I see a few challenges.
    • Deciding on a single assistant. Having a choice of virtual assistants is great, but the drawback to their growing number is that no single assistant offers everything in one convenient package. You’d need to work with quite a number of assistants to offer anything close to full service.
    • No standards for a common interface. Standards have not yet been defined for virtual assistants. This means that the integration effort may be considerable or, worse yet, may make it impractical to offer access to many assistants within the car.
    • No common testing procedure. Without a standardized testing framework or procedure, a large test and validation burden is passed on to automakers for each new assistant. As the number of digital assistants proliferates, the car will be increasingly blamed for glitches falling through the cracks of a test plan.
    • Geographic localization. Users tend to favor virtual assistants developed by and marketed for their region. For example, Chinese users favor assistants developed by the BAT group (Baidu, Alibaba, and Tencent) while North American users prefer Amazon Alexa or Google Assistant.
    Such challenges are at the heart of our multi-assistant solution called the Virtual Assistant Router. Designed to connect with multiple assistants from different providers, the system leverages each assistant’s specific strengths in one seamless transaction for the end user. Drivers can invoke the router without having to remember specific control words for each assistant. They can ask about the weather and have one assistant respond, inquire about a dinner reservation and have another respond, and so on. It’s like having a staff of personal assistants without the hassle of managing them.
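
    For a flavor of how such routing might work under the hood, here's a minimal Python sketch. The assistant names, intents, and keyword lists are purely illustrative and not our actual design – a production router would classify intent with trained natural-language models rather than keyword matching:

```python
# Toy sketch of a multi-assistant router: classify the utterance by
# intent, then dispatch to whichever assistant handles that intent best.
# All names and keyword lists below are hypothetical.

INTENT_KEYWORDS = {
    "weather": {"weather", "rain", "temperature", "forecast"},
    "dining": {"reservation", "restaurant", "dinner", "table"},
    "music": {"play", "song", "music", "album"},
}

# Which assistant is strongest for each intent (made-up mapping).
INTENT_TO_ASSISTANT = {
    "weather": "assistant_a",
    "dining": "assistant_b",
    "music": "assistant_c",
}

def classify_intent(utterance: str) -> str:
    words = set(utterance.lower().split())
    best_intent, best_hits = "general", 0
    for intent, keywords in INTENT_KEYWORDS.items():
        hits = len(words & keywords)
        if hits > best_hits:
            best_intent, best_hits = intent, hits
    return best_intent

def route(utterance: str) -> str:
    # Fall back to a default assistant for unrecognized intents.
    intent = classify_intent(utterance)
    return INTENT_TO_ASSISTANT.get(intent, "assistant_a")
```

    The point of the sketch is the shape of the design – one front door, many specialists behind it – so the driver never needs to know which assistant actually answered.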

    Our Virtual Assistant Router is also white-labeled. We provide automakers with a set of core features that they can customize using a number of different options. One of those options is to incorporate our personalized speech-synthesis solution – see my previous blog for details – so that users can easily train their assistant to sound like a familiar voice (husband, child, close friend, etc).

    Be sure to make a private appointment to see this and a whole host of other innovative technologies in our booth at CES 2019.


    Sorin M. Panainte
    Senior Speech Engineer, Advanced Development
  • Using Sound to Connect With Customers and Reinforce Your Brand

    | Nov 01, 2018
    Sound is a powerful trigger of memory and emotion. Try watching a horror movie with the sound off; chances are you won’t feel scared. Then try listening to the same movie with no visuals; you’ll be amazed at how frightening it is.

    Marketers have used our innate affinity for sound to distinguish brands for many years. Because humans are naturally good at memorizing melodic patterns, linking a sound with your brand is a great way to create a bond with customers and enhance brand recall. Some of the most widely known “soundmarks” of the last several decades include McDonald’s five-note “I’m Lovin’ It” jingle, the Intel chime, and the Facebook Messenger notification. These are all perfect examples of sonic branding.

    Mitsubishi Electric sonic branding motorcycle (pretend it's a Harley)

    The American motorcycle manufacturer Harley-Davidson is considered one of the pioneers in sonic branding. In 1994, the company filed a trademark application for the sound of its distinctive V-twin engine. It was in rarefied air: by 1998, only 23 sound trademarks had been issued in the United States, against hundreds of thousands of other trademarks. Although Harley-Davidson eventually abandoned its application, more and more companies have since been able to legally protect a variety of different sounds, including Tarzan’s yell, Darth Vader’s breathing, and Homer Simpson’s grunt. Harley-Davidson was also ahead of the pack in realizing the importance of distinguishing its brand at the point of customer interaction. Often referred to as environmental sound design, this type of sonic branding distinguishes your brand by enhancing the user experience.

    This is where sonic branding can have a real impact in automotive. It’s more than just infotainment interface sounds such as taps, button presses, and swipes. Think of all the ways in which your customer interacts with your car through sound – from the start-up sequence to the shutdown tune and everything in between: seatbelt alerts, engine noise, turn signals, incoming calls, door chimes, and so on. Even though we think of these sounds as merely functional, every one of these user interactions could be setting the tone for what it feels like to be in one of your cars.

    Mitsubishi Electric sonic branding old-world clock

    Bentley Motors was another pioneer in this area. When developing their second-generation Continental GT, the company created a unique audio identity by replacing all of the interior sounds with iconic old-world sounds in order to connect customers with the company’s classic British history and heritage. (For example, they replaced the mechanical sound of the turn signal with the ticking of a grandfather clock.)

    Just like with visual brand assets, relevancy to the brand promise is critical. At Mitsubishi Electric, one of the things we’re exploring is how to use sonic branding to distinguish between vehicle models. For example, a sports car could give customers an exciting upbeat sonic experience while an eco-friendly model could offer a calmer one. An off-roader could sound playful and a luxury model more sophisticated. For cars that allow drivers to switch between different driving modes, sonic branding could be used to create audible experiences that reflect the difference drivers feel through various settings in the suspension, steering, and so on. And as we move into the future of quieter engines and autonomous vehicles, users will begin to rely even more on sonic branding and deliberate sound design to give them the sense of satisfaction and vehicle distinction they once felt by manually controlling the car.

    If you’re at CES, drop by to see me and we can chat! Whatever you do, don’t overlook the way in which users interact with the audio components of your vehicles. If you’re not using sound strategically, you’re missing a valuable way to interact with customers, build trust, and create preference.


    Sophia Mehdizadeh
    Audio Engineer, Advanced Development
  • Can We Talk? A Voice Assistant for Everyone

    | Oct 23, 2018

    Mitsubishi Electric customized voice assistants

    Voice assistants may be getting smarter, but they’re not getting any more interesting to listen to. In fact, their voices all pretty much sound the same. The question is: why should an in-car assistant sound like a branded voice that everybody else is using?

    At Mitsubishi Electric, we’re challenging the status quo of branded voice assistants by offering a new level of customization. With our solution, drivers can personalize the voice of their in-car assistant by using their own voice or another familiar voice – perhaps that of their fiancé, their child, or a close friend – to make their assistant sound more familiar and give them a unique identity that no one else has. Drivers can also use one voice for a while and then choose another, eliminating the dullness that comes from hearing the same voice over and over.

    A well-crafted and great-sounding voice is a cornerstone of a great in-vehicle user experience. At the Paris Motor Show, Peugeot announced a voice-activated virtual assistant suitable for autonomous driving in their new “e-Legend” concept car. As its voice, Peugeot adopted a digitized version of the voice of its Director of Styling, Gilles Vidal, and enabled "him" to speak in 17 languages.

    Now you’re probably wondering about the complexity and the amount of time it takes to create a new voice. In the past, the effort was significant – a couple of months of work involving many hours of speech recordings by a professional voice actor, in a professional studio, under the supervision of a linguistic expert. But this is no longer the case, thanks to new synthetic-voice-generation algorithms that are a direct result of the machine-learning advances of the last few years.

    Our next-gen infotainment solution, FlexConnect.AI, uses a deep neural network (DNN) based approach that lets us create new voices from just a few minutes of speech recordings. This enables automakers to differentiate their offering from the solutions of the major assistant providers.

    As an example of what a customized voice can sound like, I've trained the system on my voice and fed it a simple text sentence to speak. Nearly as good as the original!

    Please contact us to schedule a private demonstration at CES 2019, where you can see this exciting technology – and more!

    Sorin M. Panainte
    Senior Speech Engineer, Advanced Development
  • Avoiding the Risky Ride: 10 Must Haves for Self-driving Cybersecurity

    | Oct 15, 2018
    A malicious agent doesn't need to dictate an autonomous car's position to create havoc. They only need to influence the autonomous vehicle's piloting system through its many sensor inputs to make decisions that a human driver wouldn't – whether that's steering into lanes full of traffic, driving through crowded intersections, or purposefully diverting traffic into (or away from) selected locations. Autonomous car hackers could flood a controller with meaningless random data from imaginary sensors. They could feed an empty roadway video signal to compromised cameras. They could interfere with V2X signals so that "green lights" are broadcast to all directions in an intersection. Or they could fool traffic into navigating around false road closures. While a car's multiple collision avoidance measures may be able to minimize some attempts to subvert the car's route, at a minimum a successful hack would still cause traffic snarls and fender benders. And highly virulent or impactful attacks could allow a malevolent entity to temporarily shut down a transportation grid and halt the heartbeat of a city.

    What to do? While cybersecurity is a mindset and a never-ending discipline, we've put together our top ten checklist of things that every autonomous vehicle builder must consider when securing their next generation products, especially those modules that affect steering, braking or motion of the vehicle. 

    1) Hardware security. Modern CPUs used in autonomous automotive designs have hardware-accelerated cryptography engines and a facility for hardware-secure key storage. Use them. Configuring your system to take advantage of hardware crypto and key storage shouldn't be a herculean effort, but it's very easy to overlook since software-implemented default mechanisms will work out of the box. Just remember to double-check that you're using the hardware when you can – the superior performance and security are well worth the effort.

    2) Secure boot. Similarly, your hardware should have secure boot capability and if you're not taking advantage of it you're making it unnecessarily easy for hackers. A secure boot can drive some low-level architectural decisions but it's a definitive must-have to ensure that the system executes only authenticated firmware images. This is something that a capable OS vendor should be able to help you design and implement.

    3) Encrypt data at rest. You know what bits of data in your system are sensitive – keys, customer data, passwords, logins – just make sure you treat them that way. Data "at rest" (that is, stored for later use) must always be encrypted, and decrypted only at the point of use. Don't forget to zero-fill any buffers that contained sensitive data before releasing them back to the system. And please use proven, modern encryption standards like AES to secure that data – XORing your data with a disguise pattern is really asking for trouble.

    4) Remove backdoors. It's easy and helpful to build software with backdoors that really help your development team, hidden features that assist your testing and validation efforts, and secret codes or logs that aid your customer support staff. But all of these risk putting insightful internal data and valuable new tools into the hands of the bad guys. The simple joy of developer bypasses or secret Easter eggs will compromise your product. Don't hide unneeded features, and if you've got multiple levels of features, enable them through a securely authenticated and encrypted mechanism.

    5) Validate all input. Every developer knows they need to validate program inputs, right? Yes, of course. But unfortunately SQL injection attacks still exist. Embedded engineers tend to be pretty rigorous in validating user inputs, but there are still many blind spots in what developers consider to be inputs worthy of validation. Assume that anything the program relies on – even private configuration files, remote APIs, and protected database fields – is filled by keyboard-wielding monkeys.
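
    As a toy illustration of that mindset, here's a Python sketch that whitelists even "trusted" configuration values before using them. The field names and patterns are hypothetical – the point is that config files get the same suspicion as user input:

```python
import re

# Treat every input -- even a private config file -- as hostile.
# Field names and allowed patterns below are purely illustrative.
VALIDATORS = {
    "device_id": re.compile(r"^[A-Za-z0-9_-]{1,32}$"),
    "log_level": re.compile(r"^(debug|info|warn|error)$"),
}

def validate_config(config: dict) -> dict:
    """Return only fields that pass validation; reject everything else loudly."""
    clean = {}
    for key, value in config.items():
        pattern = VALIDATORS.get(key)
        if pattern is None:
            raise ValueError(f"unexpected config key: {key!r}")
        if not isinstance(value, str) or not pattern.fullmatch(value):
            raise ValueError(f"invalid value for {key!r}")
        clean[key] = value
    return clean
```

    Rejecting unknown keys outright, rather than ignoring them, is the deliberate design choice here: anything the validator hasn't explicitly blessed never reaches the rest of the program.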

    6) Don't mishandle credentials. Credential breaches can be one of the most damaging results of a successful hack. On the in-vehicle systems, don't store credentials in plaintext without appropriate security in place, don't store credentials temporarily, and consider whether credentials even need to be stored at all. All of these precautions especially apply for any back-end services that your autonomous car may rely on. Assume that the code in your cloud service is talking to malicious and compromised vehicles, and guardedly protect credentials everywhere they are used.
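
    One concrete example of careful credential handling, sketched in stdlib Python with illustrative parameters: store only a salted, deliberately slow hash of a secret, and compare it in constant time rather than with `==`.

```python
import hashlib
import hmac
import os

# Sketch: never store a credential in plaintext. Store a salted scrypt
# hash instead and compare in constant time. The cost parameters
# (n, r, p) are illustrative defaults, not a tuned recommendation.

def hash_credential(secret: str) -> tuple:
    salt = os.urandom(16)  # fresh random salt per credential
    digest = hashlib.scrypt(secret.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_credential(secret: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(secret.encode(), salt=salt, n=2**14, r=8, p=1)
    # compare_digest avoids leaking timing information to an attacker.
    return hmac.compare_digest(candidate, digest)
```

    The same pattern applies on the back end: the service never needs the original secret again, only the salt and the hash.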

    7) Securely update software. Over-the-air updates are a valuable tool for automakers to update cars and even more indispensable for autonomous vehicles. They're also a necessary part of cybersecurity vigilance, allowing patches to be deployed and older, vulnerable libraries to be updated. But they're designed to do exactly what hackers dream of doing – replacing existing components of a running system with new ones. Make sure that your OTA software provider is using strict security protocols, that your updates are signed and authenticated, and that all security measures that can be enabled, are.
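
    To make "signed and authenticated" concrete, here's a stdlib-only Python toy. Real OTA systems use asymmetric signatures over update images and metadata; an HMAC with a shared key stands in here purely so the sketch stays self-contained:

```python
import hashlib
import hmac

# Toy sketch of authenticated update verification. In production, an
# asymmetric signature scheme replaces this shared-key HMAC, so the
# vehicle holds no secret capable of forging updates.

def sign_image(image: bytes, key: bytes) -> bytes:
    """Producer side: compute an authentication tag over the image."""
    return hmac.new(key, image, hashlib.sha256).digest()

def verify_image(image: bytes, tag: bytes, key: bytes) -> bool:
    """Vehicle side: refuse any image whose tag doesn't verify."""
    expected = hmac.new(key, image, hashlib.sha256).digest()
    # Constant-time comparison: never use == for authentication tags.
    return hmac.compare_digest(expected, tag)
```

    The rule the sketch encodes is the important part: the update installer verifies before it ever touches flash, and a single flipped byte in the image fails verification.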

    8) Remove debug assist. Discovering and reproducing bugs encountered in the field is tremendously difficult. Unfortunately, the tools needed to uncover those bugs fly in the face of cybersecurity best practice and create a hacker's paradise. Don't leave ssh or ftp on the system, don't leave any programmer utilities on the disk image, and remove any hidden features from code and configuration files. You may need all those things for development builds, but make sure your production builds are airtight and locked down.

    9) Update or remove components. Open source libraries offer lots of functionality, embedded operating systems are chock full of features, and pre-built packages exist to simplify building and configuring software. But these time-saving tools often leave behind remnants – unused components that either nobody knows what they do, or that everyone's afraid to remove in case they might be needed one day. Those unused modules are just waiting for hackers to misuse them. Cleaning out your closet isn't fun but it feels great once you're done – and similarly, you need to perform an audit of all the software installed on your system. Be ruthless in removing things that aren't needed. And if a component is needed, aggressively update it to the latest, most secure version that has fixed all known vulnerabilities. Don't let the convenience of "leaving things alone that aren't broken" be the reason hackers break into your cars.

    10) Remove unused access. Similar to the above comment about removing unused components, there are often unused ports, protocols, and accounts on a system – leftovers from development or features superseded by newer work. Get rid of everything that a properly running system doesn't need. Don't forget hardware in this purge. Unless your device actually uses Ethernet and USB at run-time, don't leave those connectors populated on the board. Disable any USB-to-JTAG interfaces at the hardware level and remove JTAG pins or any other specialized debug connectors. If an interface isn’t needed, it shouldn't be found on any shipping hardware.

    While that wraps our top 10 checklist for basic security requirements for autonomous cars, there's a lot more to automotive cybersecurity than a single blog can cover. I'll be talking about these items and more during my talk "A Risky Ride? Cybersecurity for AVs" at ADAS & Autonomous Vehicles 2018, Tuesday October 16. Please drop by if you'd like to chat about this blog or anything else cybersecurity related. Think we missed something in our checklist? Let us know in the comments.


    Kristie Pfosi
    Senior Manager, Automotive Cyber Security
  • The Perfect Roadtrip DJ

    | Oct 10, 2018
    A great DJ is the soul of a nightclub, reading the crowd’s pulse and spinning up the right tracks to get people movin’ and groovin’.

    Ever wish cars could do that – read the likes and dislikes of everyone in the car and play the exact music that everyone's in the mood to hear?

    Mitsubishi Electric community DJ

    According to Ford research, the car is the place where the largest percentage of people listen to music – including roughly seven out of 10 people between the ages of 13 and 24, and nearly eight out of 10 drivers over the age of 45. What’s more, nearly half of surveyed participants indicated that they would pay more for higher-quality audio. Given that a new generation of drivers has grown up in an era of on-demand music streaming, an in-car DJ could help digital natives feel connected to their cars and better enjoy the driving experience.

    It’s not mind reading – Nissan is actually working on that – but the ability to cue up music that all of the car occupants can agree on. Sound intriguing? Here’s what we've been looking into.

    First, we identify who’s in the car via brought-in devices and our Driver Monitoring System. This includes basic information like age and gender, and even emotional state. Then, we correlate this data with information that the car determines about each person's music tastes, such as:
    • Frequency of songs appearing in playlists
    • Metadata of songs that have been played
    • Audio streaming sources and subscription channels (Spotify, Pandora, Apple Music, etc)
    • Songs that are skipped within the first few seconds or played all the way through
    Mitsubishi Electric community DJ diagram

    With this information, we can train a recommender system with genres, artists, music sources, and preferences for novelty versus consistency. Ultimately, we build a Venn diagram of musical interests for everyone in the car and play songs from the intersection that we predict all passengers can enjoy.
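
    The intersection step can be sketched in a few lines of Python. The passenger profiles and song catalog here are made-up placeholders for what the car would actually learn; a real recommender would score with a trained model rather than set overlap:

```python
# Toy version of the "Venn diagram" step: score each candidate song by
# how many passengers' taste profiles it overlaps, then keep only the
# songs predicted to please everyone on board.

def shared_playlist(profiles: dict, catalog: dict) -> list:
    """profiles: passenger -> set of liked genres;
    catalog: song -> set of that song's genres."""
    def score(song: str) -> int:
        genres = catalog[song]
        return sum(1 for likes in profiles.values() if likes & genres)
    ranked = sorted(catalog, key=score, reverse=True)
    # Keep only songs in the full intersection: every passenger matches.
    return [song for song in ranked if score(song) == len(profiles)]
```

    With a country fan and a classic-rock fan aboard, a song tagged with both genres survives the filter while a pure classic-rock track does not – which is exactly the roadtrip-DJ behavior described above.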

    How might this work? Let's say I love country music and my friend loves classic rock. Our Roadtrip DJ could create a playlist heavy on Neil Young, the Eagles, Shania Twain, the Band, and CCR. Too easy? How about a son who loves heavy metal and a dad who loves classical music – our Roadtrip DJ could play Yngwie Malmsteen with the Japanese Philharmonic Orchestra, Metallica’s S&M, or Rammstein’s XXI Klavier.

    We've also been thinking about how to make the solo driving experience more enjoyable through music. Much like the above scenario, we can acquire contextual information about the driver and the drive, such as:
    • Trip departure points and destinations
    • The driving situation – in high traffic, on a long drive or an empty road late at night
    • The driver’s emotional state
    • Special dates – holidays, birthdays, celebrations, vacations, anniversaries
    • Busy times – when there are lots of calendar meetings or incoming calls
    Synthesizing all of this information, we can play music that's highly personalized for different needs, like a relaxing selection before a big meeting, an energized playlist for a long drive, or a fun mix before a party.
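
    As a toy illustration of that synthesis, a handful of hand-written rules can map context to a playlist mood. The context keys, thresholds, and mood names are all hypothetical – a real system would learn these mappings from driver feedback:

```python
# Illustrative rules mapping driving context to a playlist mood.
# Keys and thresholds are invented for the sketch.

def pick_mood(context: dict) -> str:
    if context.get("minutes_to_next_meeting", 999) < 30:
        return "relaxing"      # calm the driver before a big meeting
    if context.get("trip_hours", 0) >= 3:
        return "energizing"    # keep the driver alert on a long haul
    if context.get("destination_type") == "party":
        return "fun"           # build the mood on the way there
    return "everyday"
```

    The ordering of the rules is itself a design decision: an imminent meeting outranks a long drive, which outranks the destination.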

    What kind of DJ would we imagine for our autonomous future? We've got some great ideas for that too – but you'll have to wait for my next blog for the big reveal.


    Jacek Spiewla
    User-Experience Manager, Advanced Development
  • Automotive's Strange Changes

    | Sep 20, 2018

    Software is bringing strange changes to the field of automotive.

    Not because cars are increasingly defined by and differentiated through their software – we've been adjusting to that for over a decade – or because software has development cycles measured in months rather than years. What I'm talking about is software's driving need for cybersecurity, a force that is bringing unity to the automotive ecosystem.

    Everyone – the consumer, automaker, and supplier – is negatively impacted by vehicles being hacked. It's not just that one model that's affected; the trust in an entire ecosystem can be shaken. Good cybersecurity cannot afford to be a market differentiator; it's a required baseline.

    As numerous other industries have shown, cybersecurity measures are most effective when they're cooperative. Vulnerabilities need to be communicated to others in the ecosystem as soon as they are discovered. Techniques and tools should be shared so everyone can use best development practices, proper data handling, and the latest patched libraries. And while cooperation isn't something that comes naturally to automotive, we've been making amazing strides.


    Auto-ISAC September 25 2018

    That's why I'm proud to have been elected by the Auto-ISAC board of directors as the Auto-ISAC 2018 Summit Chair. In that position, I've been responsible for planning and organizing this year’s event. I'll be providing a brief welcoming address to kick off the conference, inviting OEMs, tier ones, cybersecurity providers, and government agencies to the stage. There promises to be a wealth of collaborative discussion on legislative and regulatory policy, incident response and vulnerability management, and building a future cybersecurity workforce. (If you haven't signed up yet, there's still time!)

    If it seems like one big family, that's because it feels like it. Within the cybersecurity space, I can sit side-by-side with my colleagues from competitors like Bosch, Lear, and Continental, and customers like GM, Ford, and FCA – all working together on serious issues and how to mitigate them.

    Everyone's passionate about the same goal: making cars safe. And that spirit of cooperation is a change for the better.

    Kristie Pfosi
    Senior Manager, Automotive Cyber Security
  • Enabling the in-car cocktail party

    | May 09, 2018

    It's not what you think – we're not talking about drinking and driving! Creating an automotive speech recognition system that works well under all conditions has always been a very challenging proposition. The car’s acoustic environment has lots of loud, unpredictable sounds that compete with the driver’s voice – like wind noise, road noise, and traffic. You may be able to guess another big noise source from inside the car, especially if you’re a parent – the other occupants. Whether it’s children, companions, or colleagues, it’s not always easy to stop all chatter just so the car has a chance of recognizing what you say to it. Recognizing one speaker in a crowd is the so-called “cocktail party problem,” and it’s been a very difficult one for computers to solve, especially within the car.

    Friends talking in car

    This is why we’ve developed technology that can distinguish and separate multiple simultaneous voices, making the vehicle’s speech recognition more robust, useful, and accurate. Allowing the car to recognize hands-free commands without requiring other conversations to stop provides more natural speech interaction and a better user experience for everyone in the car. And since this solution only needs a single microphone, it doesn’t introduce additional hardware expense to the car. Great – but according to the US Census Bureau’s latest data, over 76 percent of commuters drive alone and therefore don’t need to worry about competing voices.

    Yes, but that’s today. In a few short years, perhaps even within one or two traditional automotive design cycles, mobility will be drastically transformed. Mobility as a Service (MaaS) is taking off, promising to let people dynamically choose their mode of transport, with on-demand vehicle subscriptions and multiple car- or ride-sharing models. Far from removing the need to talk to your car, we believe self-driving technology will result in even more opportunity to talk to, direct, and control your car. The future is multi-passenger – and conversational.

    Digital assistants like Amazon Alexa and Google Assistant are the hottest thing since sliced bread. As exposure, reliability, and comfort with digital assistants grow, people will increasingly rely on them to perform tasks in the car – tasks that don't require their eyes or hands. That’s why we don’t just need digital assistants in our cars; we need ones that can listen to each one of us, even when we’re all talking.


    Jacek Spiewla
    User-Experience Manager, Advanced Development
  • Speaking to Smarter Cars

    | May 03, 2018

    If you or I were eavesdropping on someone’s conversation we’d probably be able to tell if the speaker was a man or a woman, if they were young or old, maybe even where they lived or their first language. Computers, on the other hand, have no such ability; in their attempts to understand human speech they often overlook contextual details and user characteristics that people naturally absorb.


    Speech recognition is one of the most challenging areas of computer science. While improvements have been slow to materialize, the technology is finally beginning to become a useful tool – as anyone who uses a virtual assistant can tell you. That said, the next major challenge in computerized speech recognition is understanding the contextual details of speech and the personal characteristics of those speaking.


    What would this advancement mean for cars? We believe it would be a solid building block for improving the in-car user experience. Let’s look at a couple of cases where a car can help its users once it can “listen between the lines.”


    • Juan asks his car to “Enciende la radio” (“Turn on the radio”). Since the car doesn’t understand that command, it checks it against a few different languages in its library and determines that the request is in Spanish. It then turns on the radio as requested and automatically switches the infotainment system, speech recognition, and instrument-cluster settings to Spanish.


    • Olivia uses voice recognition to authorize herself as a new driver of her parents’ sedan. The car recognizes her and auto-sets the “safe-driving” profile her parents have configured for her. Because the car recognizes her voice as that of a young woman, it fine-tunes the speech recognition models using a higher pitch for better accuracy while switching the default satellite radio presets from classic rock to electronic dance music.


    • Josephine flies from Montreal to Atlanta and starts up her rental car. The car detects her French accent and prompts her to see if she would like to switch to French. As Josephine is very comfortable in English, she replies “No”. The car then asks if she would prefer metric measurements and she readily answers “Yes”, so the car switches its units to metric.


    • As a Brooklyn native, Tony has never before owned a car. His brand-new car detects his New York accent and asks if he would like to enable the “urban native” features – street-parking warnings, parking-lot pricing display, and automatic traffic-avoidance suggestions. Tony knows the city but not how to navigate it by car, so he enables them right away.
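
    A crude version of the language-detection step in these scenarios can be sketched with small per-language keyword sets. The hint lists below are illustrative stand-ins for the statistical language-identification models a real system would use:

```python
# Toy language detector: score the utterance against a few tiny
# per-language hint sets and pick the best match. Hint sets are
# illustrative, not real models.

LANG_HINTS = {
    "es": {"enciende", "la", "radio", "apaga", "el"},
    "fr": {"allume", "la", "radio", "éteins", "le"},
    "en": {"turn", "on", "the", "radio", "off"},
}

def detect_language(utterance: str) -> str:
    words = set(utterance.lower().split())
    # Pick the language whose hint set overlaps the utterance most.
    return max(LANG_HINTS, key=lambda lang: len(words & LANG_HINTS[lang]))
```

    In Juan's scenario above, the detected language would then drive the switch of the infotainment, speech-recognition, and instrument-cluster settings.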

    These are just a few examples of how the determination of voice characteristics can help improve the user experience in a car. We at Mitsubishi Electric are looking into this along with how voice context can be merged with information from social networks to let the car guide preferences in music, shopping, restaurants, or even the in-car environment. With some appropriate smarts, the car can be an amazing accompaniment to a predictably perfect experience.

    Jacek Spiewla
    User-Experience Manager, Advanced Development
  • Let It Snow

    | Feb 16, 2018
    Snowy winter driving

    If you live in the Snow Belt, you might be tempted to take autonomous driving technology with a dose of (road) salt. You’ve driven in winter and you know what it’s about. Sometimes the road is covered with black ice, snow flies off another car’s roof, or chunks of ice litter the road – sometimes it’s hard to see the road at all, making you wonder how today’s ADAS features function in inclement weather. You may even have fond memories as a young driver of practicing your winter driving skills in an empty snow-covered parking lot with Mom or Dad. If you do, you know that self-driving cars are going to need their own snowy driving drills before they are up to the task.


    That’s exactly what we’re doing with our self-driving technology at Mitsubishi Electric. Our high-precision mapping technology allows us to always know where the road is – even if it’s covered in a white blanket of snow. This is crucial for self-driving cars in winter and has been a major stumbling block for the industry to date. We’ve already tested it with our own “parking lot” test, and road tests are currently in progress.


    If you’ve been watching the PyeongChang Winter Olympics like I have, it’s hard not to be astounded at the world’s best athletes shredding the slopes. For the more adventurous among you, those amazing skiing and boarding skills might have awakened the urge to hit the hill a bit yourself. Consider this: By the time the 2022 Winter Olympics start in Beijing, your car will be able to take you to your local ski resort and back – all by itself.

    Gareth Williams
    Executive Director of Advanced Development
  • Portrait Displays and the 2019 RAM 1500

    | Jan 17, 2018
    CNET 2019 RAM 1500 Infotainment

    If you had a chance to see some of the more exciting announcements coming out of the Detroit Auto Show (aka NAIAS, the North American International Auto Show), you may have seen the new 2019 RAM 1500. If you did, I’m sure you didn’t miss the beautiful new 12” portrait-mode Uconnect infotainment system. Here’s a glowing review from CNET Roadshow.

    2014 Cadillac CTS demo with FlexConnect


    In that video, the reviewer Antuan Goodwin compares the RAM 1500 display to one in the Tesla Model S. That particular comment got us thinking – at Mitsubishi Electric, we’ve been doing huge portrait displays for a long time now. We were building portrait mode infotainment systems with our FlexConnect platform before the Model S made luxurious portrait displays the hot new thing. Here’s a 2014 Cadillac CTS outfitted with a portrait FlexConnect (conveniently parked in a cafeteria for our press event that year).




    FCA Police Charger w/ FlexConnect

    We think portrait mode displays are the future – and Praveen Chandrasekar at Frost & Sullivan agrees in his piece about the Volvo XC90 display. As another example, we supply the mobile command center for a different FCA product we can talk about: the Law Enforcement Dodge Charger. 


    We love the 2019 RAM 1500 infotainment system, and we’re really happy it’s getting rave reviews. We’ve always loved trucks, and we’re glad they’re getting some long overdue attention as one of the coolest cockpits at NAIAS.


    Gareth Williams
    Executive Director of Advanced Development

  • CES 2018 - Day three highlights: The battle of the digital assistant

    | Jan 12, 2018

    CES 2018 banner

    Moving out of the automotive-centric CES north hall into the south and central halls, one thing becomes clear. This is the year for digital assistants – primarily Amazon Alexa and Google Assistant – and they’re fighting for dominance in the technology landscape. Voice technology has finally reached the point where it’s usable by the general public without the hassle and pain of earlier solutions. For the car, that’s great news as voice assistants are more than just a cool feature – they can actually improve safety while increasing functionality and productivity.

    Hey Google Jeep dialog @ CES 2018

    Amazon is ahead in the number of announced integrations, and they seem to be working hard to be an easy integration partner. Alexa-based rollouts in the works now include Toyota and Lexus, joining previously announced members Ford, Volkswagen, Hyundai, and Volvo. Alexa has also made a big bid in the home, with ecosystem partners on the computer side like Acer, Asus, and HP, as well as appliance manufacturers like Whirlpool and Kohler. As we see more home/car integrations, this ecosystem dominance will strongly tie the two together. We can immediately think of several use cases.


    This isn’t to say that Google is standing still. Partners include Honda, Hyundai, GM, and Kia. In the home, they’re going into LG ThinQ, JBL speakers, and Sony TV. Google is clearly making a push to try to unseat Amazon’s market lead.

    Some products incorporate both Google and Amazon digital assistants. Will they both be active simultaneously, dependent on keyword, or will they be enabled exclusively? Time will tell but we may end up seeing a couple of different approaches to handling multiple digital assistants.

    Of course, CES’s new auto-related tech isn’t all voice assistants. LIDAR companies claiming the fastest, cheapest, smallest, or otherwise best technology – from Velodyne, Quanergy, AEye, Luminar, LeddarTech, and Innoviz – are all jockeying for a position in a market that’s about to explode.

    Yamaha drone

    And while there are a huge number of drone companies with use cases ranging from photography and security patrols to emergency supply delivery and medical assistance, curiously none of them are showing automotive use cases like we’re demoing in our booth.

    It’s been a packed three days – it’s almost time to look forward to next year!


    Gareth Williams
    Executive Director of Advanced Development

  • CES 2018 – Day two highlights: Bringing it home

    | Jan 11, 2018
    Ford ‘living street’ @ CES2018

    We’ve been saying it for some time and it’s our theme for the show: Automakers no longer want to be defined by the traditional (and narrow) definition of automotive but want to be seen as mobility providers. Moreover, they want to influence not only how we commute but also our lifestyle. Nowhere was this more evident on the CES show floor than with Ford. This Motor City giant has an impressive vision for the connected city. Their booth is in fact a “living street”, complete with a self-driving delivery car that can cut down on traffic and parking hassles by allowing groceries, dry cleaners, and other businesses to share delivery vehicles. This makes more room for green living spaces – and people. Definitely worth a visit.


    Jeep made a solid lifestyle play on their booth with a Home-to-Car demo that features Amazon Alexa and Google Assistant. Owners of the 2018 Jeep Cherokee Latitude with the optional “Tech Connect” package can now ask their favorite digital assistant to remotely start and stop the engine, lock and unlock the doors, monitor vehicle vitals, and more.


    Honda autonomous ATV @ CES2018

    Honda rules the roost with robotics, but another standout is their autonomous ATV, built on their existing rugged ATV chassis, with heavy-duty rails on top to accept multiple different accessories. It’s interesting not only for fire and rescue or construction work; Honda also imagines it for personal use. Want extra help around the ranch – feeding cattle or cutting weeds? What about plowing the snow out of your driveway, hauling rocks, or raking leaves? Program your rugged D18 to take care of it.


    Mercedes MBUX @ CES2018

    Mercedes had some very cool steering-wheel haptic HMIs that showed off their new MBUX user interface with natural language integration – CNET has a great detailed review. It’s almost hard to look at the HMI as it’s right next to their gorgeous AMG electric supercar – 0-200 km/h in under 6 seconds – but we managed and so should you.


    Although things continue to be extremely busy on the Mitsubishi Electric booth, we hope to get further afield tomorrow to check out the drone and robotics technology. Once again, stay tuned!


    Gareth Williams
    Executive Director of Advanced Development

  • CES 2018 – Day one highlights: Industry synergy

    | Jan 10, 2018
    We spent most of our time in the North Hall on day one, where the majority of automotive tech is focused – sandwiched in between other meetings, of course. What did we see?


    Toyota ‘beloved intelligence’ @ CES2018

    As expected, AI and autonomous driving were focal points for many. Toyota showed their “beloved intelligence” concept car – a vehicle that learns what its owner wants and tries to deliver more of the same, much like a favorite pooch. Kia also had something similar in their Niro EV – watching drivers’ faces to predict their music and playlists. Both of these concepts are things we’ve discussed as predictive HMIs (and of which we’re demoing our own flavor). Something we hadn’t seen before: Nissan and their “mind reading” technology. Like a Muse headband but for the car; the idea is to make the car a partner in the driving activity rather than a replacement by predicting early on what drivers might do and helping the car take action or warn of trouble situations. It’s definitely an intriguing use of wearable technology, but it’ll be interesting to see if this has any legs when it comes to consumer usability.


    Byton @ CES2018

    EV continues to gain traction; every automaker we saw had more EV models in the pipeline – Kia announced 16 new EV models by 2025 – and some like Hyundai are even looking at hydrogen fuel cells as an alternative option. Chargepoint had a great booth showcasing their latest charging and power stations. And taking the place of last year’s buzz around Faraday Future was the buzz over Byton, another Chinese-funded, cleanly designed, and incredibly sexy EV. It’s the next competitor in a long line (the Karma, NextEV, Lucid, etc.) attempting to unseat Tesla. Will they be successful? Check back next year to see.


    Notable too was the surge of 5G-related technology and announcements – Qualcomm, Baidu, Verizon, and Nokia are all throwing in their hats. Think of a fully mobile network with WiFi data speeds and you’ll sense the excitement. Qualcomm also discussed C-V2X, or cellular V2X – using car-to-car LTE for fast, low-latency communication in a way that could finally make V2X a reasonable proposition. We’ll have to wait for the network operators to make it ubiquitous before the automakers jump on board, but they’ve already started dipping in their toes … it won’t be long.


    Hyundai home @ CES2018

    Finally, we saw automakers stretching their market muscle into non-automotive applications of their technology, like Hyundai with their hydrogen-powered home or Toyota with their personal mobility “scooter”. We expect to be seeing a lot more of this trend as soon as we get the chance to cover more of the floor! Stay tuned.


    Gareth Williams
    Executive Director of Advanced Development

  • Autonomous reality check: The pressing need for highly accurate maps

    | Jan 09, 2018
    Five years ago, the most sophisticated in-car navigation system displayed roads using GPS to locate a vehicle within five yards, providing fairly reliable turn-by-turn driving directions. However, the system could be off by 50 yards in densely populated areas with urban canyons, and fail completely in tunnels. Too many of us have horror stories of ending up in a sand pit instead of a campground, or in someone’s driveway instead of a parking lot. The proliferation of Google Maps on smartphones somewhat improved the situation in that most maps were updated far more frequently than their in-car cousins, but as a GPS-based system its accuracy was still unreliable in many situations.


    For the coming generation of autonomous vehicles, the convenience of low-precision maps is no longer enough. Centimeter-level accuracy becomes critical.


    A few years ago, some automakers had hoped that autonomous vehicles might be able to position themselves using low-definition maps and high-powered sensors. With clear road markings, visual sensors could keep cars safely within lanes and spot the dotted lines indicative of exits.


    The problem is that a fully driverless car needs to operate safely in all environments and under all conditions. LIDAR has an effective range of around 50 yards, but that can dwindle significantly in a snowstorm or when other vehicles obscure objects. Even the smartest car travelling on a freeway can only “see” ahead of itself about a second and a half. Self-driving cars need to be able to anticipate turns and junctions far beyond their sensors’ horizons. Perhaps more importantly, they also need to locate themselves precisely, as an error of a couple of yards could place a driverless car in oncoming traffic.
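    The “second and a half” figure is easy to sanity-check. Here’s a minimal back-of-the-envelope sketch; the 65 mph freeway speed is our assumption, not a figure from the text:

```python
# Sanity check: how many seconds of travel does a ~50-yard sensor
# horizon buy a car at freeway speed?
LIDAR_RANGE_M = 50 * 0.9144        # ~50 yards converted to meters
FREEWAY_SPEED_MPS = 65 * 0.44704   # assumed 65 mph, in meters/second

lookahead_s = LIDAR_RANGE_M / FREEWAY_SPEED_MPS
print(f"Sensor horizon: {lookahead_s:.1f} s")
```

    At 65 mph this works out to roughly 1.6 seconds – squarely in line with the estimate above.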


    That’s why we’re working to help advance autonomous technology on two fronts: building highly accurate maps and building highly accurate position determination into cars.


    Mitsubishi Electric MMS scanning system

    To create high-precision 3D maps, we’ve developed the Mobile Mapping System (MMS), a self-contained sensor-studded platform with multiple cameras, LIDAR, GNSS receivers, and processors mounted on a vehicle. The system scans a road segment to create a comprehensive and highly accurate digital representation of that road – one that is used both for training autonomous systems and for creating extremely accurate maps.

    For high-accuracy position determination, we provide a number of technologies that are integrated into the car’s sensor network. We combine inputs from several satellite positioning systems to improve the traditional accuracy that comes from using only one. (Our experience in satellites includes the latest satellite positioning network sponsored by the Japanese government.) We also have technology that improves the accuracy of standard dead reckoning systems – augmenting wheel tick sensors and rough directionality with camera images that track road position and speed – resulting in far better position estimations in the absence of satellite signals.
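    To make the dead-reckoning idea concrete, here’s a minimal illustrative sketch – not our production algorithm (real systems use full Kalman filtering over many sensors) – of coasting on wheel speed and heading when satellite signals disappear, then blending a GNSS fix back in when it returns:

```python
import math
from dataclasses import dataclass

@dataclass
class Position:
    x: float  # meters east of a local origin
    y: float  # meters north

def dead_reckon(last, speed_mps, heading_rad, dt_s):
    """Advance the last known position from wheel speed and heading."""
    return Position(last.x + speed_mps * math.cos(heading_rad) * dt_s,
                    last.y + speed_mps * math.sin(heading_rad) * dt_s)

def fuse(predicted, gnss_fix, gnss_weight=0.8):
    """Simple complementary blend: lean on GNSS when it's available,
    otherwise coast on the dead-reckoned estimate (e.g. in a tunnel)."""
    if gnss_fix is None:
        return predicted
    w = gnss_weight
    return Position(w * gnss_fix.x + (1 - w) * predicted.x,
                    w * gnss_fix.y + (1 - w) * predicted.y)
```

    Augmenting the dead-reckoned step with camera-tracked road position amounts to feeding a better speed and heading estimate into the same loop.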

    We’re working closely with industry mapping experts to further augment our technology. Visit our website for more information and then drop by our booth at CES to learn more.



    Mark Rakoski

    Vice President of Sales, Engineering, and R&D Global Business Development

  • Harmonizing the home and the hatchback

    | Jan 05, 2018

    The smart home market is growing almost as rapidly as the connected car market but so far the two have yet to fully converge and what their consolidated future will look like is still up for grabs. When most people envision the connection between car and home, they often think of integrated media, navigation, or energy management. We at Mitsubishi Electric think of much more. In fact, we see cars working in harmony with smart homes in a way that offers exciting new conveniences and cost savings in a seamless and intuitive manner.

    Car-to-home integration
    Predictive integration
    It’s easy enough to imagine a driver manually using a vehicle interface to turn up their home’s heating on the way home from work. Or using a smart home assistant to check on the car’s charge status or fuel level. But imagine the benefits that could result from the car and home working together on their occupants’ behalf without continuous human intervention, using a blend of home-to-car connectivity and predictive HMI technology.

    For example, the car could automatically notify the home at the end of the work day as it gets progressively closer so that the home could bring its temperature up or down according to owner preferences and time of year. Once the car signals to the home that it’s pulling into the driveway, the home could turn on the lights for a welcoming arrival. Conversely, as the last occupant leaves for the day, the car could alert the home that everyone was gone, asking it to adjust the temperature, turn off the lights, lock the door, and arm the security system.

    Voice-activated integration
    Digital assistants with conversational speech interfaces can provide another key technology to bridge the car and home. Market researcher Ovum forecasts the digital-assistant market will grow from 3.6 billion in 2016 to 7.5 billion by 2021. This means there will be almost as many digital assistants in 2021 as there are humans on the planet today, creating a tremendous opportunity for the automotive industry.

    Virtual assistants like Amazon Alexa, Apple Siri, or Microsoft Cortana could provide a sense of delight that keeps car ownership sexy. Imagine your customer’s surprise the very first time their assistant asks, “I see you have an appointment across town in an hour; would you like me to warm up the car?”

    New mobility integration
    Home-to-car integration may be just as useful in new mobility scenarios; add in digital assistants and you’ll have some killer applications. For example, vehicle owners in car sharing pools could use a virtual assistant to find out what’s happening with their car – where it is, who’s driving it, who’s in the passenger seat, and how fast it’s going. And, if they need to take their car back for the day, they could find out when it’s expected to complete its current trip and arrive home.

    Mitsubishi Electric’s FLEXConnect.AI is one of the best-integrated infotainment offerings, making it perfectly positioned for developing car-to-home and home-to-car use cases. We’re also experts at virtual assistant integration. We’ll be showing some innovative ways to take advantage of both trends in our CES 2018 demo car. Visit our CES web page for more information and then make an appointment to see us next week in Vegas.

    Jacek Spiewla
    User-Experience Manager, Advanced Development

  • Biometrics: The key to your car

    | Jan 03, 2018

    The advanced technology that makes cars harder to steal also makes car keys harder to copy. So while losing a car key has never been fun, these days it’s an expensive proposition that most consumers would rather avoid. One solution is to let wearables act as car keys. With at least a quarter of American adults owning a wearable and estimates on wearable adoption continuing to rise, this may be the ideal way for cars to recognize and authenticate their owners. At Mitsubishi Electric, we’ve been investigating a biometric wearable strategy for vehicle authentication and have come up with the following ideas we’d like to share.


    Lifestyle branding

    Car-aware wearables could provide automakers with the perfect way to extend a premium brand onto an owner’s wrist. A Cadillac, Lexus, Mercedes, or Porsche wrist strap would allow consumers to enter their cars while acting as an extension of those same brand qualities into other mobility spaces. Rather than a single-purpose widget, an OEM-branded biometric wearable could interface with a smart home, favorite apps, personal devices, or enterprise equipment, giving OEMs a much broader mobility presence. This capability could also take advantage of consumers’ existing wearables with branded applications on FitBit, Apple Watch, or Android Wear.


    Car-sharing convenience

    Biometric wearable unlocking car

    Biometric wearables could make a great accessory for car sharing. By authenticating the owner through their biometric identification, a car sharing service could guarantee proper access to whatever vehicles an individual is allowed to use without needing to worry about pass codes or physical keys. That freedom could extend to car subscription models where someone swaps cars on a dealer lot at will, with a person’s biometric data providing the link to an online account, payment, and insurance details.


    In addition to replacing the physical key, car rentals, car sharing, and ride sharing would all benefit from the car’s ability to confidently and uniquely recognize an individual and personalize itself accordingly. Biometric identification would allow a car to automatically access and provide many convenience features like smartphone pairing, calendar syncing, and routing to preset locations from either personal mobile or cloud repositories. People’s preferred traveling identities would follow them wherever they go without worry about smartphone loss or theft.


    Remote control

    With a biometric wearable and a smart watch, people would have the ability to remotely control their car from anywhere with more identity assurance than a physical key fob. Your customer could roll down the windows and unlock the car for the kids while still inside packing lunches. Or they could open the trunk to let the neighbor borrow a scissor jack while they’re on vacation.


    Increased security

    Accessing cars via biometric prints means that consumers can’t lose their “keys”. It also means that someone can’t steal someone else’s key and thus their car without the appropriate and unique fingerprint, iris, voice, or ECG. Biometric security also makes it far simpler for fleet or business owners to provision vehicles since transferring cars could take place via a few mouse clicks instead of a cumbersome exchange of key fobs.


    Improved UX
    Biometrics could identify everyone in the vehicle, not just the owner (or whoever’s got the key fob). This fine-grained information about a car’s occupants could provide a much improved experience, from individual mobile payments to per-seat personalization. With individual identification, the infotainment system could automatically adapt to the driver and front-seat passenger, and rear-seat entertainment could show everyone’s favorite shows.
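    As a rough sketch of how per-seat personalization might hang together – the profile fields, IDs, and fallback behavior here are purely hypothetical:

```python
# Hypothetical profile store keyed on biometric IDs.
PROFILES = {
    "b-1001": {"name": "Dana", "seat_heat": 2, "playlist": "Morning News"},
    "b-1002": {"name": "Sam",  "seat_heat": 0, "playlist": "Kids Favorites"},
}

def configure_cabin(occupants):
    """Map each detected occupant (seat -> biometric ID) to preferences."""
    settings = {}
    for seat, bio_id in occupants.items():
        profile = PROFILES.get(bio_id)
        if profile is None:
            settings[seat] = {"playlist": "Default"}   # unrecognized guest
        else:
            settings[seat] = {"seat_heat": profile["seat_heat"],
                              "playlist": profile["playlist"]}
    return settings
```

    The key design point is that the seat, not the car, is the unit of personalization: each recognized occupant gets their own settings, and strangers degrade gracefully to defaults.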


    These are just some of the reasons we think biometrics will gain traction within automotive. We’ll be showcasing a subset of these use cases on our booth at CES 2018 – be sure to make an appointment to see them and our other innovations.

    Gareth Williams
    Executive Director of Advanced Development

  • Augmented reality: Amping up personalization and car sharing

    | Dec 20, 2017

    Imagine this: A city street crowded with cars and a sidewalk teeming with dozens of people, all looking at their phones. Except these people aren’t looking down at their phones, isolated in their personal conversations. Instead, they’re holding their phones up at street level, using them as augmented reality (AR) portals, viewing cars on the street. What do they see?

    Yellow car with AR car sharing tags
    It’s an interesting thought experiment, one that we’ll be answering in our demo car at CES 2018. Giving cars an AR presence may seem like a fun idea without a lot of practical use. At Mitsubishi Electric we’ve imagined both playful and eminently practical ways an AR portal could be used to delight your customers.

    • Personalization – At the playful end of the spectrum, people could dictate a text message that hovers over the vehicle as a refreshable bumper sticker, broadcasting a person’s passions. This could also be a graphical image, letting creative people take car customization to the next level and graffiti artists tag their own transportation.

    • Family car – For a shared family car, a private AR message could be anything from “Low on gas, sorry Dad!” to “Pick up Emma at ballet @ 2pm”. This type of messaging becomes even more effective if the smartphone is also used to unlock and start the car.

    • Taxi – Taxis could show their availability status – as well as their rates – in big bubbles floating over their cars. As soon as someone steps in for a ride, the bubble would disappear. This could make it super easy to identify which taxis are in service, even from some distance away.

    • Car sharing – For car-sharing services like Zipcar, car2go, or Turo, a bubble over a shared car could advertise its availability, hours, prices, pick-up points, or any number of useful things. Instead of sharable cars only being advertised online, they’d be promoting themselves everywhere they go.

    • Lyft pickups – It’s a common scenario: you’re scouting for your Uber or Lyft car that says it’s arrived … only you can’t find it. Ride shares could show big tags with customer names – visible only to the relevant customers – letting people use their phones to scan pickup points and easily find their rides.
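    One way to model these bubbles is as tags with a visibility scope – public for taxis and shared cars, restricted for rideshare pickups. A minimal illustrative sketch (all names and fields hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class ARTag:
    vehicle_id: str
    text: str
    visible_to: set = field(default_factory=set)  # empty set = public

    def is_visible(self, viewer_id):
        """Public tags show to everyone; scoped tags only to listed viewers."""
        return not self.visible_to or viewer_id in self.visible_to

# A public taxi bubble vs. a private rideshare pickup tag
taxi = ARTag("taxi-42", "Available")
pickup = ARTag("lyft-7", "Pickup for Alex", visible_to={"alex-phone"})
```

    The same record covers the family-car and personalization cases too – a private note is just a tag scoped to the family’s devices.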

    These are just some of the concepts we’ve got for mixing AR and cars based on our many years of experience in infotainment and UX. We think this is a rich new way to let drivers communicate to others and we’ve got a prototype that shows it working. Make an appointment to stop by our CES booth in January 2018 and we’ll show it to you in action!

    Gareth Williams
    Executive Director of Advanced Development

  • Changing the way we pay

    | Dec 12, 2017

    Are consumers ready to put their wallets and their smartphones aside, and use their cars to make everyday purchases? While most people still feel best using a credit or debit card (or even cash), this scenario is not as far off as some may think.

    “Mobile payments” is a broad term that can mean anything from buying something in person to sending someone an e-check. Most people are familiar with the point-of-sale payment commonly known as tap to pay. Lots of stores have systems in place, and some retailers – fast-food chains, gas stations, coffee shops, and the like – have what are called closed-loop mobile payments, which are company-specific. Similarly, many people have either used PayPal, Google Wallet, or Apple Pay on their phones or know someone who has.

    Buying coffee with car+mobile payments

    As cars increasingly become smartphones on wheels, it’s logical if not inevitable that they will soon be taking care of purchases too. In-vehicle payments make a lot of sense – who wants to fumble with a credit card to pay for gas in the middle of a snowstorm? It’s easy to envision other scenarios as well: drive-thru windows, parking lots, car washes, and more. But the benefits of in-vehicle payments don’t stop there.

    Cars could be used as trusted payment providers to consolidate individual orders and payments, making complex transactions trivial. For example, a car could be used in a drive-thru situation where each person has an individual order. It could also be used to fairly distribute payments for transportation-related activities that carry an associated cost (like toll roads or bridges, gas, parking, etc).

    How would this work? The car creates a trust fabric – a number of nodes that reciprocally trust each other – from individually associated devices. Those devices are smartphones or wearables (biometric devices) – things that are able to uniquely identify a person with a guarantee of authenticity. The trust fabric allows payment systems access to selected attributes (such as payment methods) of the authorized individuals within the car. The car basically acts as a broker to external entities, making a single consolidated order and payment from several individual transactions.

    Imagine placing an in-vehicle coffee order for four. The driver and front passenger use the IVI system while the rear-seat passengers use their phones or an RSE interface. The car locates the nearest retail store and plots a route to it while sending in the complete order. It also exchanges payment information and informs the store of the car’s anticipated arrival time. Once the car pulls up to the drive-thru, it communicates its trusted ID to the coffee shop's order and payment system – which lets the person at the window verify the order, assemble it, and serve it – all without a coin or card having been exchanged, making the whole experience more pleasant while saving the driver time.
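    A minimal sketch of the consolidation step described above – the car merging per-occupant orders into one transaction while still tracking who owes what. The names and prices are hypothetical, and a real broker would of course involve authentication and a payment gateway:

```python
def consolidate(orders):
    """Merge per-occupant orders into one combined order and total.

    `orders` maps an authenticated occupant ID to (item, price) pairs.
    The car submits the combined order but remembers each person's share.
    """
    combined, per_person = [], {}
    for person, items in orders.items():
        combined.extend(item for item, _ in items)
        per_person[person] = sum(price for _, price in items)
    return {"items": combined,
            "total": round(sum(per_person.values()), 2),
            "split": per_person}

order = consolidate({
    "driver":    [("latte", 4.50)],
    "passenger": [("espresso", 3.00), ("scone", 2.75)],
})
```

    The coffee shop sees one order and one payment; the split lives inside the car’s trust fabric.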

    This is the type of value-added service that FLEXConnect.AI makes possible. We’re demonstrating this and other innovations at CES in just over a month. Be sure to book an appointment to come see us if you’re going to be at the show. 


    Mohammad Horani
    Manager, Advanced Development

  • The Silicon Valet

    | Dec 05, 2017

    In Iron Man, Tony Stark relies on Jarvis, his holographic, always-available virtual assistant, for everything from passcodes to propulsion. What makes an AI like Jarvis so amazingly handy isn’t just its capabilities but its ability to predict what Tony needs – most of the time even before his needs are expressed.

    Your robot assistant is coming soon

    While today's car isn’t going to help build a fist-size nuclear fusion reactor anytime soon, it certainly could be a bit more helpful than it is right now. What if, like Jarvis, a car could predict people's needs and offer helpful, timely suggestions or subtly change its behavior to anticipate them? What if a car could save time in accomplishing a driver's most frequent and repetitive tasks? We’re exploring this in our latest CES demo car by extending our FLEXConnect.AI infotainment system with the concept of a predictive HMI (human-machine interface). What might this look like?

    • Your car looks at your linked phone and sees you’ve got “Pick up eyeglasses” on your to-do list. When you step in the car, the car offers, “I see you need to pick up your eyeglasses. Dr. Scott’s office opens tomorrow morning at 7:30 AM, so you’d have plenty of time to pick them up before your first meeting at 10:00 AM. Would you like me to remind you tomorrow?” When you say yes, the car alerts you in the morning before work and routes you to the optometrist avoiding the morning rush-hour traffic.
    • Friday is the family’s pizza and movie night, and your car notices that every Friday evening you seem to head over to Newport Pizza. On your way home from work the next Friday, it asks, “Would you like to get your standard pizza from Newport’s for a 6:30 PM pick-up as usual? I can also dial them for you if you’d like a different pizza tonight.”
    • Your car sees that you almost always prefer to roll down the windows instead of using climate control. It also notices that you usually listen to news radio during the morning commute but your iPhone music at other times. Accordingly, it reconfigures a single programmable display dashboard button to show “Heat” when it’s below 45 degrees outside, “Cool” when above 85 degrees, “WJR” in the morning, and “iPhone” for afternoon or evening.
    • Your car tells you “It’s Anna’s birthday tomorrow. Would you like to get her a gift? There are a few stores within ten minutes of here that have items from her wish list.”
    • You’re on a long road trip, and the car notices that your head is nodding a bit. “You seem a bit sleepy. Would you like a coffee? There is a Starbucks, a Tim Horton’s, and a McDonald's all within five minutes of here.”
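    The adaptive dashboard button in the third example boils down to a small set of context rules. A minimal sketch, with the thresholds and station names taken from the example above:

```python
def pick_button(outside_temp_f, hour):
    """Choose what the single programmable dashboard button shows,
    given outside temperature (Fahrenheit) and hour of day (0-23)."""
    if outside_temp_f < 45:
        return "Heat"
    if outside_temp_f > 85:
        return "Cool"
    # Mild weather: fall back to the learned listening habit
    return "WJR" if 6 <= hour < 12 else "iPhone"
```

    In a real predictive HMI, these hand-written rules would instead be learned from usage history, but the runtime decision looks much the same.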

    Cars currently have an amazing amount of information available to them, from linked devices and cloud accounts, to driving history. With the power of big data and machine learning, they can correlate all this data, learning over time what discrete events occur in people's lives and building relationships between events to understand what things drivers might want. Like an online product recommendation, a predictive HMI can also compare patterns to those of others to help understand what to expect next.

    For predictions to be unobtrusive and improve over time, an HMI would need to get feedback about its suggestions. This feedback allows the HMI to explain why it is offering a suggestion in the first place, and to correct itself in future circumstances. Here’s a dialog illustrating this point.

    Car: “Do you want to call Mike?”
    Driver: “Why?”
    Car: “Because he’s left two voice mails.”
    Driver: “No thanks!”
    Car: “No problem. Should I delete those voice mails?”
    Driver: “Sure.”
    Car: “Done.”
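    Under the hood, this kind of feedback loop can be as simple as a running acceptance score per suggestion type. A purely illustrative sketch (the scoring scheme and thresholds are our assumptions, not a description of any shipping system):

```python
class SuggestionEngine:
    def __init__(self):
        self.scores = {}   # suggestion kind -> running acceptance score

    def should_offer(self, kind):
        """Stop offering suggestions the driver keeps declining."""
        return self.scores.get(kind, 0.5) >= 0.2

    def record_feedback(self, kind, accepted):
        old = self.scores.get(kind, 0.5)
        # Exponential moving average of accept (1.0) / decline (0.0)
        self.scores[kind] = 0.7 * old + 0.3 * (1.0 if accepted else 0.0)

engine = SuggestionEngine()
for _ in range(5):                      # driver declines five times
    engine.record_feedback("call_back", accepted=False)
```

    After a handful of declines the score decays below the threshold and the car stops nagging – while suggestion types it has never tried remain on the table.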

    Even with all the data in the world, a super-predictive HMI won’t always get everything right – just like a human assistant. But making it a smart, adaptable, and natural user experience – whether through a touch screen or a voice-based virtual assistant – allows it to become an indispensable part of tomorrow’s mobility solutions.

    As mentioned, we’ll be showing innovative ways to exploit this predictive HMI technology in our CES 2018 demo car. Visit our CES web page for more information on this and other innovations that we'll be showing in our booth. And don’t forget to make an appointment – times are filling up.

    Voratima Orawannukul
    User-Experience Lead Engineer, Advanced Development

  • The Crisis of Cool – and Hybrid Haptics

    | Nov 29, 2017

    A modern user interface communicates through subtle shadings, gestures, interactivity, and animation. While a rich, visual design language is in demand for phones and desktops, it’s not yet part of the automotive experience. Driver distraction concerns over the last several years have led to today’s simplified textures, high-contrast fonts, and static displays that fly in the face of the current UX paradigm, resulting in a stodgy, dated interface.


    What can a car UX designer do to make cars cool yet still keep them safe? One approach is to use what we at Mitsubishi Electric call hybrid haptics: touch screens with tactile components.


    A hybrid haptic begins with a physical knob, button, dial, or other tangible control. Whether the control is built into a screen bezel, attached directly onto a screen, or nomadically placed on various surfaces, a physical mechanism for a digital control takes advantage of our tactile nature. (Touch is the first sense we develop as infants and remains critical to our health and happiness throughout our lives.) By exposing select features through a fixed button or knob, a hybrid haptic lets users find frequently used functions instead of searching for them in levels of menus.


    However, a hybrid haptic is much more than a traditional push-button or dial – it’s a physical control enriched with a touch screen (beside, underneath, or on top of the control) that displays the control’s function, action, and state, and also allows the control to be changed as circumstances dictate. Let’s look at a few examples.


    Microsoft Surface Dial

    Probably one of the more intriguing applications of hybrid haptics is the Microsoft Surface Dial. While intended for consumer devices – rather than automotive – this handy little gadget shows what’s possible with the creative use of several coordinated technologies. The freely placed Surface Dial connects to a tablet through Bluetooth, senses where it’s placed, and interacts with applications to dynamically control content. For example, while working in a drawing application, a digital artist can rotate the dial to select a color rather than navigate through floating tool bars. While the technology is definitely cool, you’re probably not going to see arbitrarily placed dials around the interior of a car anytime soon.


    Our next example is strictly automotive – a Mitsubishi Electric proof-of-concept in a Jaguar F-Pace demo car. To illustrate how hybrid haptics can make the car UX more tactile and exciting while being less distracting to the driver, we’ve mounted two large dials directly on the car’s touch screen. These dials give a user control of common rotary functions – such as volume control, temperature adjustment, radio preset selection, and map zoom – through simple voice commands. What’s more, the dials can be reconfigured on demand as the user’s needs change. Because the dials are mounted on the screen, the user still gets the physical sensation of turning them to directly engage with content – and because it’s a touch screen, that content can dynamically change as needed.
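    The reconfigurable dials described above suggest a simple software pattern: a physical control whose rotary handler and on-screen label can be reassigned at runtime. The sketch below is a hypothetical illustration of that pattern in Python – the `HybridDial` class and its methods are our own invention for this post, not an actual Mitsubishi Electric API.

    ```python
    # Hypothetical sketch of a reconfigurable hybrid-haptic dial.
    # All names here are illustrative; no published API is implied.

    class HybridDial:
        """A physical dial mounted on a touch screen whose function
        and on-screen label can be reassigned at runtime."""

        def __init__(self, name):
            self.name = name
            self.handler = None          # current rotary function
            self.label = "unassigned"    # what the screen shows under the dial

        def assign(self, label, handler):
            """Reconfigure the dial: update both behavior and label together."""
            self.label = label
            self.handler = handler

        def rotate(self, ticks):
            """Forward a rotation event (positive = clockwise) to the handler."""
            if self.handler:
                return self.handler(ticks)

    # One of the rotary functions the demo car exposes: volume control.
    volume = 10
    def adjust_volume(ticks):
        global volume
        volume = max(0, min(30, volume + ticks))   # clamp to 0..30
        return volume

    dial = HybridDial("left")
    dial.assign("Volume", adjust_volume)
    dial.rotate(3)                    # three detents clockwise
    print(dial.label, volume)         # prints: Volume 13
    ```

    The point of the pattern is that reassignment updates the behavior and the on-screen label in a single step, so the physical control and the content it engages with never fall out of sync.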

    Art Lebedev Optimus Aux keyboard

    Our last example is an OLED switch; this particular one comes from the Art Lebedev Optimus Aux keyboard. These cool little devices are push buttons with miniaturized screens on their faces. A handful of these rich graphical buttons could really liven up a center-stack display, adapting to the user’s whim or the current car context by showing new functionality when it’s needed. Unfortunately, we don’t (yet) have a real-life example of these in automotive use.


    With hybrid haptics, you can really increase the cool factor, decrease the driver’s mental workload, and create an overall superior user experience. What’s not to like? To see these innovations in person, make an appointment to see us at CES 2018 in January.

    Jacek Spiewla
    User-Experience Manager, Advanced Development
