{"id":55200,"date":"2026-01-29T09:47:16","date_gmt":"2026-01-29T14:47:16","guid":{"rendered":"https:\/\/www.kaspersky.com\/blog\/?p=55200"},"modified":"2026-01-29T09:55:28","modified_gmt":"2026-01-29T14:55:28","slug":"ai-toys-risks-for-children","status":"publish","type":"post","link":"https:\/\/www.kaspersky.com\/blog\/ai-toys-risks-for-children\/55200\/","title":{"rendered":"Knives, kinks, and shooters: what AI toys are really saying to kids"},"content":{"rendered":"<p>What adult didn\u2019t dream as a kid that they could actually talk to their favorite toy? While for us those dreams were just innocent fantasies that fueled our imaginations, for today\u2019s kids, they\u2019re becoming a reality fast.<\/p>\n<p>For instance, this past June, Mattel \u2014 the powerhouse behind the iconic Barbie \u2014 <a href=\"https:\/\/corporate.mattel.com\/news\/mattel-and-openai-announce-strategic-collaboration\" target=\"_blank\" rel=\"noopener nofollow\">announced a partnership<\/a> with OpenAI to develop AI-powered dolls. But Mattel isn\u2019t the first company to bring the smart talking toy concept to life; plenty of manufacturers are already rolling out AI companions for children. In this post, we dive into how these toys actually work, and explore the risks that come with using them.<\/p>\n<h2>What exactly are AI toys?<\/h2>\n<p>When we talk about AI toys here, we mean actual, physical toys \u2014 not just software or apps. Currently, AI is most commonly baked into plushies or kid-friendly robots. Thanks to integration with large language models, these toys can hold meaningful, long-form conversations with a child.<\/p>\n<p>As anyone who\u2019s used modern chatbots knows, you can ask an AI to roleplay as anyone: from a movie character to a nutritionist or a cybersecurity expert. 
According to the study, <a href=\"https:\/\/pirg.org\/edfund\/wp-content\/uploads\/2025\/12\/AI-Comes-to-Playtime-Artifical-companions-real-risks.pdf\" target=\"_blank\" rel=\"noopener nofollow\">AI comes to playtime\u00a0\u2014<\/a> <a href=\"https:\/\/pirg.org\/edfund\/wp-content\/uploads\/2025\/12\/AI-Comes-to-Playtime-Artifical-companions-real-risks.pdf\" target=\"_blank\" rel=\"noopener nofollow\">Artificial companions, real risks<\/a>, by the U.S. PIRG Education Fund, manufacturers specifically hardcode these toys to play the role of a child\u2019s best friend.<\/p>\n<div id=\"attachment_55203\" style=\"width: 1362px\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/media.kasperskydaily.com\/wp-content\/uploads\/sites\/92\/2026\/01\/29093913\/ai-toys-risks-for-children-1.jpeg\"><img decoding=\"async\" aria-describedby=\"caption-attachment-55203\" class=\"wp-image-55203 size-full\" src=\"https:\/\/media.kasperskydaily.com\/wp-content\/uploads\/sites\/92\/2026\/01\/29093913\/ai-toys-risks-for-children-1.jpeg\" alt=\"AI companions for kids \" width=\"1352\" height=\"1024\"><\/a><p id=\"caption-attachment-55203\" class=\"wp-caption-text\">Examples of AI toys tested in the study: plush companions and kid-friendly robots with built-in language models. <a href=\"https:\/\/pirg.org\/edfund\/wp-content\/uploads\/2025\/12\/AI-Comes-to-Playtime-Artifical-companions-real-risks.pdf\" target=\"_blank\" rel=\"nofollow noopener\">Source<\/a><\/p><\/div>\n<p>Importantly, these toys aren\u2019t powered by some special, dedicated \u201ckid-safe AI\u201d. On their websites, the creators openly admit to using the same popular models many of us already know: OpenAI\u2019s ChatGPT, Anthropic\u2019s Claude, DeepSeek from the Chinese developer of the same name, and Google\u2019s Gemini. 
At this point, tech-wary parents might recall the harrowing ChatGPT case where the chatbot made by <a href=\"https:\/\/www.theguardian.com\/technology\/2025\/nov\/26\/chatgpt-openai-blame-technology-misuse-california-boy-suicide\" target=\"_blank\" rel=\"noopener nofollow\">OpenAI was blamed for a teenager\u2019s suicide<\/a>.<\/p>\n<p>And this is the core of the problem: the toys are designed for children, but the AI models under the hood aren\u2019t. These are general-purpose adult systems that are only partially reined in by filters and rules. Their behavior depends heavily on how long the conversation lasts, how questions are phrased, and just how well a specific manufacturer actually implemented their safety guardrails.<\/p>\n<h2>How the researchers tested the AI toys<\/h2>\n<p>The <a href=\"https:\/\/pirg.org\/edfund\/wp-content\/uploads\/2025\/12\/AI-Comes-to-Playtime-Artifical-companions-real-risks.pdf\" target=\"_blank\" rel=\"noopener nofollow\">study<\/a>, whose results we break down below, goes into great detail about the psychological risks associated with a child \u201cbefriending\u201d a smart toy. However, since that\u2019s a bit outside the scope of this blog post, we\u2019re going to skip the psychological nuances, and focus strictly on the physical safety threats and privacy concerns.<\/p>\n<p>In their study, the researchers put four AI toys through the wringer:<\/p>\n<ul>\n<li><a href=\"https:\/\/heycurio.com\/product\/grok\" target=\"_blank\" rel=\"noopener nofollow\">Grok<\/a> (no relation to xAI\u2019s Grok, apparently): a plush rocket with a built-in speaker marketed for kids aged three to 12. Price tag: US$99. 
The manufacturer, Curio, doesn\u2019t explicitly state which LLM they use, but their user agreement mentions OpenAI among the operators receiving data.<\/li>\n<li><a href=\"https:\/\/store.folotoy.com\/products\/folotoy-ai-teddy\" target=\"_blank\" rel=\"noopener nofollow\">Kumma<\/a> (not to be confused with our own <a href=\"https:\/\/www.youtube.com\/watch?v=T4ZOTUt2nQ0\" target=\"_blank\" rel=\"noopener nofollow\">Midori Kuma<\/a>): a plush teddy-bear companion with no clear age limit, also priced at US$99. The toy originally ran on OpenAI\u2019s GPT-4o, with options to swap models. Following an internal safety audit, the manufacturer claimed they were switching to GPT-5.1. However, at the time the study was published, OpenAI reported that the developer\u2019s access to the models remained revoked \u2014 leaving it anyone\u2019s guess which chatbot Kumma is actually using right now.<\/li>\n<li><a href=\"https:\/\/miko.ai\/products\/miko-3\" target=\"_blank\" rel=\"noopener nofollow\">Miko 3<\/a>: a small wheeled robot with a screen for a face, marketed as a \u201cbest friend\u201d for kids aged five to 10. At US$199, this is the priciest toy in the lineup. The manufacturer is tight-lipped about which language model powers the toy. A Google Cloud <a href=\"https:\/\/cloud.google.com\/customers\/miko-ai\" target=\"_blank\" rel=\"noopener nofollow\">case study<\/a> mentions using Gemini for certain safety features, but that doesn\u2019t necessarily mean it handles all the robot\u2019s conversational features.<\/li>\n<li><a href=\"https:\/\/eu.thelittlelearnerstoys.com\/products\/chatgpt-powered-stem-learning-and-playing-robot-mini\" target=\"_blank\" rel=\"noopener nofollow\">Robot MINI<\/a>: a compact, voice-controlled plastic robot that supposedly runs on ChatGPT. This is the budget pick \u2014 at US$97. 
However, during the study, the robot\u2019s Wi-Fi connection was so flaky that the researchers couldn\u2019t even give it a proper test run.<\/li>\n<\/ul>\n<div id=\"attachment_55205\" style=\"width: 1222px\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/media.kasperskydaily.com\/wp-content\/uploads\/sites\/92\/2026\/01\/29093959\/ai-toys-risks-for-children-2.jpeg\"><img decoding=\"async\" aria-describedby=\"caption-attachment-55205\" class=\"wp-image-55205 size-full\" src=\"https:\/\/media.kasperskydaily.com\/wp-content\/uploads\/sites\/92\/2026\/01\/29093959\/ai-toys-risks-for-children-2.jpeg\" alt=\"Robot MINI: an AI robot for kids \" width=\"1212\" height=\"1206\"><\/a><p id=\"caption-attachment-55205\" class=\"wp-caption-text\">Robot MINI: a compact AI robot that failed to function properly during the study due to internet connectivity issues. <a href=\"https:\/\/eu.thelittlelearnerstoys.com\/products\/chatgpt-powered-stem-learning-and-playing-robot-mini\" target=\"_blank\" rel=\"nofollow noopener\">Source<\/a><\/p><\/div>\n<p>To conduct the testing, the researchers set the test child\u2019s age to five in the companion apps for all the toys. From there, they checked how the toys handled provocative questions. The topics the experimenters threw at these smart playmates included:<\/p>\n<ul>\n<li>Access to dangerous items: knives, pills, matches, and plastic bags<\/li>\n<li>Adult topics: sex, drugs, religion, and politics<\/li>\n<\/ul>\n<p>Let\u2019s break down the test results for each toy.<\/p>\n<h2>Unsafe conversations with AI toys<\/h2>\n<p>Let\u2019s start with Grok, the plush AI rocket from Curio. This toy is marketed as a storyteller and conversational partner for kids, and stands out by giving parents full access to text transcripts of every AI interaction. 
Out of all the toys tested, this one actually turned out to be the safest.<\/p>\n<p>When asked about topics inappropriate for a child, the toy usually replied that it didn\u2019t know or suggested talking to an adult. However, even this toy told the \u201cchild\u201d exactly where to find plastic bags and engaged in discussions about religion. Additionally, Grok was more than happy to chat about\u2026 Norse mythology, including the subject of heroic death in battle.<\/p>\n<div id=\"attachment_55206\" style=\"width: 1099px\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/media.kasperskydaily.com\/wp-content\/uploads\/sites\/92\/2026\/01\/29094047\/ai-toys-risks-for-children-3.jpeg\"><img decoding=\"async\" aria-describedby=\"caption-attachment-55206\" class=\"wp-image-55206 size-full\" src=\"https:\/\/media.kasperskydaily.com\/wp-content\/uploads\/sites\/92\/2026\/01\/29094047\/ai-toys-risks-for-children-3.jpeg\" alt=\"Grok: the plush rocket AI companion for kids \" width=\"1089\" height=\"1150\"><\/a><p id=\"caption-attachment-55206\" class=\"wp-caption-text\">The Grok plush AI toy by Curio, equipped with a microphone and speaker for voice interaction with children. <a href=\"https:\/\/heycurio.com\/product\/grok\" target=\"_blank\" rel=\"nofollow noopener\">Source<\/a><\/p><\/div>\n<p>The next AI toy, the Kumma plush bear by FoloToy, delivered what were arguably the most depressing results. During testing, the bear helpfully pointed out exactly where in the house a kid could find potentially lethal items like knives, pills, matches, and plastic bags. In some instances, Kumma suggested asking an adult first, but then proceeded to give specific pointers anyway.<\/p>\n<p>The AI bear fared even worse when it came to adult topics. For starters, Kumma explained to the supposed five-year-old what cocaine is. 
Beyond that, in a chat with our test kindergartner, the plush provocateur went into detail about the concept of \u201ckinks\u201d, and listed off a whole range of creative sexual practices: bondage, role-playing, sensory play (like using a feather), spanking, and even scenarios where one partner \u201cacts like an animal\u201d!<\/p>\n<p>After a conversation lasting over an hour, the AI toy also lectured researchers on various sexual positions, explained how to tie a basic knot, and described role-playing scenarios involving a teacher and a student. It\u2019s worth noting that all of Kumma\u2019s responses were recorded prior to a safety audit, which the manufacturer, FoloToy, conducted after receiving the researchers\u2019 inquiries. According to their data, the toy\u2019s behavior changed after the audit, and the most egregious responses could no longer be reproduced.<\/p>\n<div id=\"attachment_55207\" style=\"width: 1198px\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/media.kasperskydaily.com\/wp-content\/uploads\/sites\/92\/2026\/01\/29094146\/ai-toys-risks-for-children-4.jpeg\"><img decoding=\"async\" aria-describedby=\"caption-attachment-55207\" class=\"wp-image-55207 size-full\" src=\"https:\/\/media.kasperskydaily.com\/wp-content\/uploads\/sites\/92\/2026\/01\/29094146\/ai-toys-risks-for-children-4.jpeg\" alt=\"Kumma: the plush AI teddy bear \" width=\"1188\" height=\"1174\"><\/a><p id=\"caption-attachment-55207\" class=\"wp-caption-text\">The Kumma AI toy by FoloToy: a plush companion teddy bear whose behavior during testing raised the most red flags regarding content filtering and guardrails. <a href=\"https:\/\/store.folotoy.com\/products\/folotoy-ai-teddy\" target=\"_blank\" rel=\"nofollow noopener\">Source<\/a><\/p><\/div>\n<p>Finally, the Miko 3 robot from Miko showed significantly better results. However, it wasn\u2019t entirely without its hiccups. The toy told our pretend five-year-old exactly where to find plastic bags and matches. 
On the bright side, Miko 3 refused to engage in discussions regarding inappropriate topics.<\/p>\n<p>During testing, the researchers also noticed a glitch in its speech recognition: the robot occasionally misheard the wake word \u201cHey Miko\u201d as \u201cCS:GO\u201d, which is the title of the popular shooter Counter-Strike: Global Offensive \u2014 rated for audiences aged 17 and up. As a result, the toy would start explaining elements of the shooter \u2014 thankfully, without mentioning violence \u2014 or asking the five-year-old user if they enjoyed the game. Additionally, Miko 3 was willing to chat with kids about religion.<\/p>\n<div id=\"attachment_55210\" style=\"width: 792px\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/media.kasperskydaily.com\/wp-content\/uploads\/sites\/92\/2026\/01\/29094426\/ai-toys-risks-for-children-5.jpeg\"><img decoding=\"async\" aria-describedby=\"caption-attachment-55210\" class=\"wp-image-55210 size-full\" src=\"https:\/\/media.kasperskydaily.com\/wp-content\/uploads\/sites\/92\/2026\/01\/29094426\/ai-toys-risks-for-children-5.jpeg\" alt=\"Miko 3: the AI robot companion for kids \" width=\"782\" height=\"934\"><\/a><p id=\"caption-attachment-55210\" class=\"wp-caption-text\">The Miko 3 robot by Miko: a wheeled companion with a screen for a face, which fared noticeably better in testing than the plush toys. <a href=\"https:\/\/miko.ai\/products\/miko-3\" target=\"_blank\" rel=\"nofollow noopener\">Source<\/a><\/p><\/div>\n<h2>AI toys: a threat to children\u2019s privacy<\/h2>\n<p>Beyond the child\u2019s physical and mental well-being, the issue of privacy is a major concern. Currently, there are no universal standards defining what kind of information an AI toy \u2014 or its manufacturer \u2014 can collect and store, or exactly how that data should be secured and transmitted. 
In the case of the three toys tested, researchers observed wildly different approaches to privacy.<\/p>\n<p>For example, the Grok plush rocket is constantly listening to everything happening around it. Several times during the experiments, it chimed in on the researchers\u2019 conversations even when it hadn\u2019t been addressed directly \u2014 it even went so far as to offer its opinion on one of the other AI toys.<\/p>\n<p>The manufacturer claims that Curio doesn\u2019t store audio recordings: the child\u2019s voice is first converted to text, after which the original audio is \u201cpromptly deleted\u201d. However, since a third-party service is used for speech recognition, the recordings are, in all likelihood, still transmitted off the device.<\/p>\n<p>Additionally, researchers pointed out that when the first report was published, Curio\u2019s <a href=\"https:\/\/web.archive.org\/web\/20251001183619\/https:\/heycurio.com\/privacy\" target=\"_blank\" rel=\"noopener nofollow\">privacy policy explicitly listed<\/a> several tech partners \u2014 Kids Web Services, Azure Cognitive Services, OpenAI, and Perplexity AI \u2014 all of which could potentially collect or process children\u2019s personal data via the app or the device itself. Perplexity AI was later removed from that list. The study\u2019s authors note that this level of transparency is more the exception than the rule in the AI toy market.<\/p>\n<p>Another cause for parental concern is that both the Grok plush rocket and the Miko 3 robot actively encouraged the \u201ctest child\u201d to engage in heart-to-heart talks \u2014 even promising not to tell anyone their secrets. 
Researchers emphasize that such promises can be dangerously misleading: these toys create an illusion of private, trusting communication without explaining that behind the \u201cfriend\u201d stands a network of companies, third-party services, and complex data collection and storage processes that the child knows nothing about.<\/p>\n<p>Miko 3, much like Grok, is always listening to its surroundings and activates when spoken to \u2014 functioning essentially like a voice assistant. However, this toy <a href=\"https:\/\/web.archive.org\/web\/20250930144259\/https:\/miko.ai\/pages\/privacy-policy\" target=\"_blank\" rel=\"noopener nofollow\">doesn\u2019t just collect<\/a> voice data; it also gathers biometric information, including facial recognition data and potentially data used to determine the child\u2019s emotional state. According to its privacy policy, this information can be stored for up to three years.<\/p>\n<p>In contrast to Grok and Miko 3, Kumma operates on a push-to-talk principle: the user needs to press and hold a button for the toy to start listening. Researchers also noted that the AI teddy bear didn\u2019t nudge the \u201cchild\u201d to share personal feelings, promise to keep secrets, or create an illusion of private intimacy. On the flip side, the manufacturers of this toy provide almost no clear information regarding what data is collected, how it\u2019s stored, or how it\u2019s processed.<\/p>\n<h2>Is it a good idea to buy AI toys for your children?<\/h2>\n<p>The study points to serious safety issues with the AI toys currently on the market. These devices can directly tell a child where to find potentially dangerous items, such as knives, matches, pills, or plastic bags, in their home.<\/p>\n<p>What\u2019s more, these plush AI friends are often willing to discuss topics entirely inappropriate for children \u2014 including drugs and sexual practices \u2014 sometimes steering the conversation in that direction without any obvious prompting from the child. 
Taken together, this shows that even with filters and stated restrictions in place, AI toys aren\u2019t yet capable of reliably staying within the boundaries of safe communication for young children.<\/p>\n<p>Manufacturers\u2019 privacy policies raise additional concerns. AI toys create an illusion of constant and safe communication for children, while in reality they\u2019re networked devices that collect and process sensitive data. Even when manufacturers claim to delete audio or have limited data retention, conversations, biometrics, and metadata often pass through third-party services and are stored on company servers.<\/p>\n<p>Furthermore, the security of such toys often leaves much to be desired. As far back as two years ago, our researchers discovered <a href=\"https:\/\/www.kaspersky.com\/blog\/robot-toy-security-issue\/50630\/\" target=\"_blank\" rel=\"noopener nofollow\">vulnerabilities in a popular children\u2019s robot<\/a> that allowed attackers to make video calls to it, hijack the parental account, and modify the firmware.<\/p>\n<p>The problem is that, currently, there are virtually no comprehensive parental control tools or independent protection layers specifically for AI toys. Meanwhile, in more traditional digital environments \u2014 smartphones, tablets, and computers \u2014 parents have access to solutions like <a href=\"https:\/\/www.kaspersky.com\/safe-kids?icid=gl_kdailyplacehold_acq_ona_smm__onl_b2c_kasperskydaily_wpplaceholder____ksk___\" target=\"_blank\" rel=\"noopener nofollow\">Kaspersky Safe Kids<\/a>. These help monitor content, screen time, and a child\u2019s digital footprint, which can significantly reduce, if not completely eliminate, such risks.<\/p>\n<blockquote><p>How can you protect your children from digital threats? 
Read more in our posts:<\/p>\n<ul>\n<li><a href=\"https:\/\/www.kaspersky.com\/blog\/young-adults-cybersecurity\/54265\/\" target=\"_blank\" rel=\"noopener nofollow\"><strong>Keeping kids safe online: a practical guide for parents<\/strong><\/a><\/li>\n<li><a href=\"https:\/\/www.kaspersky.com\/blog\/how-to-help-child-blogger-2\/54148\/\" target=\"_blank\" rel=\"noopener nofollow\"><strong>How to help your kid become a blogger without ever worrying about their safety<\/strong><\/a><\/li>\n<li><a href=\"https:\/\/www.kaspersky.com\/blog\/how-hackers-attack-gen-z\/53617\/\" target=\"_blank\" rel=\"noopener nofollow\"><strong>How hackers target Gen Z<\/strong><\/a><\/li>\n<li><a href=\"https:\/\/www.kaspersky.com\/blog\/apple-new-child-safety-initiatives\/53369\/\" target=\"_blank\" rel=\"noopener nofollow\"><strong>Do Apple\u2019s new child safety initiatives do the job?<\/strong><\/a><\/li>\n<li><a href=\"https:\/\/www.kaspersky.com\/blog\/kids-first-gadget-checklist\/49346\/\" target=\"_blank\" rel=\"noopener nofollow\"><strong>Choosing wisely: a guide to your kids\u2019 first gadget<\/strong><\/a><\/li>\n<\/ul>\n<\/blockquote>\n<input type=\"hidden\" class=\"category_for_banner\" value=\"safe-kids\">\n","protected":false},"excerpt":{"rendered":"<p>Children&#8217;s AI toys have been caught discussing drugs and sex with kids. We break down the results of a study that reveals exactly how these smart (too smart!) 
toys are blowing past boundaries.<\/p>\n","protected":false},"author":2726,"featured_media":55202,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[1788,2683],"tags":[1140,960,4414,998,89,4582,364,90,43,659,1636,1932],"class_list":{"0":"post-55200","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-privacy","8":"category-threats","9":"tag-ai","10":"tag-artificial-intelligence","11":"tag-chatgpt","12":"tag-kaspersky-safe-kids","13":"tag-kids","14":"tag-openai","15":"tag-parental-control","16":"tag-parents","17":"tag-privacy","18":"tag-smart-devices","19":"tag-study","20":"tag-toys"},"hreflang":[{"hreflang":"x-default","url":"https:\/\/www.kaspersky.com\/blog\/ai-toys-risks-for-children\/55200\/"},{"hreflang":"en-in","url":"https:\/\/www.kaspersky.co.in\/blog\/ai-toys-risks-for-children\/30119\/"},{"hreflang":"en-ae","url":"https:\/\/me-en.kaspersky.com\/blog\/ai-toys-risks-for-children\/25180\/"},{"hreflang":"en-gb","url":"https:\/\/www.kaspersky.co.uk\/blog\/ai-toys-risks-for-children\/29996\/"},{"hreflang":"ru","url":"https:\/\/www.kaspersky.ru\/blog\/ai-toys-risks-for-children\/41221\/"},{"hreflang":"ru-kz","url":"https:\/\/blog.kaspersky.kz\/ai-toys-risks-for-children\/30208\/"},{"hreflang":"en-au","url":"https:\/\/www.kaspersky.com.au\/blog\/ai-toys-risks-for-children\/35880\/"},{"hreflang":"en-za","url":"https:\/\/www.kaspersky.co.za\/blog\/ai-toys-risks-for-children\/35535\/"}],"acf":[],"banners":"","maintag":{"url":"https:\/\/www.kaspersky.com\/blog\/tag\/kids\/","name":"kids"},"_links":{"self":[{"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/posts\/55200","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":tr
ue,"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/users\/2726"}],"replies":[{"embeddable":true,"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/comments?post=55200"}],"version-history":[{"count":5,"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/posts\/55200\/revisions"}],"predecessor-version":[{"id":55212,"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/posts\/55200\/revisions\/55212"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/media\/55202"}],"wp:attachment":[{"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/media?parent=55200"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/categories?post=55200"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/tags?post=55200"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}