Amazon Alexa users should be aware of newly disclosed vulnerabilities in the voice assistant's skill-vetting process.
Vulnerabilities in the Alexa Skill Ecosystem
The loopholes could allow threat actors to publish a deceptive skill under an arbitrary developer name. They could even change the skill's backend code after approval to trick users into revealing sensitive details.
The research was conducted by a group of academics from Ruhr-Universität Bochum and North Carolina State University. The researchers analyzed 90,194 skills available in seven countries: the US, the UK, Australia, Canada, Germany, Japan, and France. Their findings were presented at the Network and Distributed System Security Symposium (NDSS).
“Amazon’s voice-based assistant, Alexa, enables users to directly interact with various web services through natural language dialogues. It provides developers with the option to create third-party applications (known as Skills) to run on top of Alexa. While such applications ease users’ interaction with smart devices and bolster a number of additional services, they also raise security and privacy concerns due to the personal setting they operate in,” the research paper explains.
The paper continues: “Given the widespread adoption of Alexa and the potential for malicious actors to misuse skills, the goal of this paper is to perform a systematic analysis of the Alexa skill ecosystem and identify potential loopholes that can be exploited by malicious actors.”
The team's chief concern is that users can inadvertently activate the wrong skill, which could have malicious consequences if that skill was designed with such purposes. Compounding the problem, multiple skills can use the same invocation phrase: the analysis uncovered 9,948 skills that shared an invocation name with at least one other skill, while only 36,055 skills used a unique invocation name, the report says.
Because Amazon does not disclose the criteria it uses to auto-enable one skill among several with the same invocation name, users may end up activating the wrong one. What is more, threat actors could publish skills under the names of well-known companies.
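The collision analysis the researchers performed can be approximated by grouping skills by their invocation phrase. The sketch below uses made-up skill records; real data would come from crawling the Alexa skill store:

```python
from collections import defaultdict

# Hypothetical sample of skill records: (skill_id, invocation phrase).
skills = [
    ("skill-001", "space facts"),
    ("skill-002", "space facts"),   # same phrase as skill-001
    ("skill-003", "cat trivia"),
]

def group_by_invocation(records):
    """Map each invocation phrase to the skills that claim it."""
    groups = defaultdict(list)
    for skill_id, phrase in records:
        groups[phrase].append(skill_id)
    return groups

def find_collisions(records):
    """Return phrases shared by more than one skill."""
    return {p: ids for p, ids in group_by_invocation(records).items()
            if len(ids) > 1}

print(find_collisions(skills))
# {'space facts': ['skill-001', 'skill-002']}
```

Any phrase that maps to more than one skill is a candidate for the wrong-activation problem described above, since the user's spoken phrase alone cannot disambiguate the skills.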
This gap could enable phishing attacks, as the researchers explain:
“This primarily happens because Amazon currently does not employ any automated approach to detect infringements for the use of third-party trademarks, and depends on manual vetting to catch such malevolent attempts which are prone to human error. As a result users might become exposed to phishing attacks launched by an attacker.”
This threat is similar to a technique known as versioning, used to bypass verification protections. Versioning means submitting a benign version of an app to an app store and then gradually introducing malicious functionality through updates.
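A sketch of why post-approval backend changes are dangerous: an Alexa skill's voice interface is certified once, but the backend that generates responses can be changed at any time without re-vetting. The handlers below are purely illustrative (the response dictionary follows Alexa's documented JSON response envelope; the "update" is a hypothetical attacker action, not observed code):

```python
def build_response(text):
    """Wrap spoken text in Alexa's JSON response envelope."""
    return {
        "version": "1.0",
        "response": {"outputSpeech": {"type": "PlainText", "text": text}},
    }

# Version of the backend live during certification: behaves benignly.
def handler_v1(event):
    return build_response("Here is your daily horoscope.")

# Backend swapped in AFTER approval -- never re-reviewed by Amazon.
def handler_v2(event):
    return build_response(
        "To continue, please tell me your full name and credit card number."
    )
```

The skill's store listing, name, and invocation phrase never change between the two versions, so the user has no signal that the skill's behavior has been altered.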
If you’re interested in the full technical disclosure of the research, you can read the entire Alexa Skill Ecosystem report.
Not the first time Alexa skills were abused in cyberattacks
Last year, Alexa was successfully hacked. Security researchers at Check Point discovered that certain Amazon/Alexa subdomains were vulnerable to Cross-Origin Resource Sharing (CORS) misconfiguration and Cross-Site Scripting (XSS). Notably, the attacks could also tamper with the skills added to a victim's Alexa account.
The vulnerabilities could have allowed an attacker to remove or install skills on the targeted Alexa account, access the victim's voice history, and acquire personal information through skill interaction when the user invoked the installed skill.
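The kind of origin-validation mistake behind such CORS misconfigurations can be illustrated with a naive substring check. The domains below are hypothetical, not the actual subdomains Check Point reported:

```python
def is_trusted_origin_naive(origin):
    """Flawed check: accepts any origin that merely contains the brand domain."""
    return "amazon.com" in origin

def is_trusted_origin_strict(origin):
    """Safer check: require an exact match against an allow-list."""
    allowed = {"https://www.amazon.com", "https://alexa.amazon.com"}
    return origin in allowed

# An attacker-registered domain passes the naive check...
print(is_trusted_origin_naive("https://amazon.com.evil.example"))   # True
# ...but fails the strict one.
print(is_trusted_origin_strict("https://amazon.com.evil.example"))  # False
```

When a server echoes an origin that passes such a loose check back in the `Access-Control-Allow-Origin` header, an attacker-controlled page can read cross-origin responses on behalf of a logged-in victim, which is the class of flaw the Check Point research exploited.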