An engineer got curious about how his iLife A11 smart vacuum worked and monitored the network traffic coming from the device. That’s when he noticed it was constantly sending logs and telemetry data to the manufacturer — something he hadn’t consented to. The user, Harishankar, decided to block the telemetry servers’ IP addresses on his network, while keeping the firmware and OTA servers open. The vacuum kept working for a while, but soon refused to turn on at all. After a lengthy investigation, he discovered that a remote kill command had been issued to his device.
The fact that this isn’t considered outright fraud is disturbing. This person OWNS the device, yes? They’re not leasing it.
FFS, this should be illegal.
I agree with you that this should be illegal. I expect this was in the terms of service, though. Since we have no laws restricting this kind of bullshit, the company can argue that they’re within their rights.
We need some real legislation around privacy. It’s never going to happen, but it needs to. We need a right to anonymity but that is too scary for advertisers and our police state.
I expect this was in the terms of service, though
While I expect the same, there’s also just a reasonableness standard. If Meta and Google updated their TOS to say that users agreed to become human chattel slaves who mine cobalt and forfeit their rights, no court (…right, SCOTUS?..right?) would uphold that. A TOS is a contract, but it’s mostly there to shield companies from liability. Taking active steps to brick someone’s device because it stopped connecting to its C2 server (the company had zero evidence this was done intentionally, and a router firewall misconfiguration could just as easily have caused the same thing) is, IMO, something that should result in a lawsuit.
How often are the terms of service evident at the time of purchase? It’s unreasonable to assume at checkout that the price only buys a limited period of use. I doubt they put it on the box or on the Amazon page when you purchased stuff like this. Are you supposed to buy it and then return it after reading the fine print in the instruction booklet after opening it up?
Unfortunately this is from a Chinese company, and China will never make it illegal; hell, they’re more likely to pass a law requiring ILIFE to share the personal data with the government than to tell them not to collect it. This could be enforced for US-based companies, but as long as we buy luxury goods from China this is going to be a fact of life.
My robot vac will only operate when connected to the Internet so it’s only allowed to communicate when actually in use. As soon as it returns to the charger Internet access is automatically blocked.
Unfortunately the manufacturer has deliberately made this as inconvenient as possible. If communication is blocked for more than a few hours the vacuum loses all maps and will no longer even load saved maps from the Tuya app. To use it the vac must be powered down and the app killed. Only then can a saved map be restored.
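The dock-triggered blocking described above could be automated on a Linux-based router. This is only a sketch: the vacuum’s IP, and the idea that dock state is available from some local integration, are assumptions, not anything from the original comment. It builds the iptables commands rather than executing them, so you can wire it to whatever automation you actually run.

```python
# Sketch: toggle a robot vacuum's internet access from a Linux router.
# Assumptions (not from the thread): the router runs iptables, the vacuum
# has a fixed LAN IP, and dock state is reported by some local integration.

VACUUM_IP = "192.168.1.50"  # hypothetical static DHCP lease for the vacuum

def firewall_cmd(action: str) -> list[str]:
    """Build an iptables command that drops (or re-allows) forwarded
    traffic from the vacuum. action is '-I' to insert the rule, '-D'
    to delete it."""
    return ["iptables", action, "FORWARD", "-s", VACUUM_IP, "-j", "DROP"]

def rule_for_dock_state(docked: bool) -> list[str]:
    # Docked -> block internet; undocked (actively cleaning) -> allow it.
    return firewall_cmd("-I" if docked else "-D")

print(rule_for_dock_state(True))   # the command to run when it docks
print(rule_for_dock_state(False))  # the command to run when it undocks
```

In practice you would pass these to `subprocess.run` with root privileges, or reproduce the same effect with your router’s parental-controls/device-blocking UI.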
It’s too bad it’s so useful.
Name and shame.
from the Tuya app.
My robot vac will only operate when connected to the Internet
That would trigger me to return it to the store. “It doesn’t work”
If I don’t own it 100% then reimburse me if you disable it.
For me the worst part is that someone developed the functionality to monitor and track, until the signal is lost, and if so, kill. It’s really crazy how daring this is.
How is this legal?
Shitty terms of service.
Say it with me. If buying doesn’t mean 100% ownership…
then they will buy it
I have just purchased a Dreame L10s Ultra and have had the PCB for a breakout board made and components for setting it up ordered. In a few days I should get the last bits and I will be able to root the device and have it connect to Valetudo managed through Home Assistant. Fully local operation with basically the same features but none of the privacy issues. As soon as I can get it connected I will be able to use it just like a robot I actually own should without some random third party being involved in every single operation.
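Once rooted, a Valetudo robot is driven entirely over the LAN through its local REST API. The sketch below shows what “fully local operation” looks like in practice; the hostname is hypothetical, and the endpoint path follows Valetudo’s v2 capability API as I understand it — verify it against the Swagger UI your own instance serves before relying on it.

```python
# Sketch: starting a Valetudo-rooted vacuum with a plain local HTTP call.
# The host is hypothetical; the endpoint is assumed from Valetudo's v2
# capability REST API and should be checked against your instance's docs.
import json
import urllib.request

VALETUDO_HOST = "http://vacuum.lan"  # hypothetical local hostname

def start_cleaning_request(host: str = VALETUDO_HOST) -> urllib.request.Request:
    """Build the PUT request that asks Valetudo to start a full clean.
    Nothing here touches the cloud: the request never leaves the LAN."""
    url = f"{host}/api/v2/robot/capabilities/BasicControlCapability"
    body = json.dumps({"action": "start"}).encode()
    return urllib.request.Request(
        url,
        data=body,
        method="PUT",
        headers={"Content-Type": "application/json"},
    )

# On a real setup you would send it:
# urllib.request.urlopen(start_cleaning_request())
```

Home Assistant’s Valetudo integration does essentially this (over MQTT rather than raw HTTP), which is why no third-party server is ever involved.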
I specifically got one that can run Valetudo, and it has worked great for over two years now, without sending images of my flat to China or the US.
I specifically got a Dreame L10s Pro Ultra so that I could use Valetudo on it. Got the needed adapter on eBay to do it but have had no time as of late to follow the steps as there are quite a few things needed to get it done.
The devs are very touchy, from what I understand, but I get it as the general public can be vexing to take questions/feedback from.
I don’t think any compatible machines can be acquired in my region any more. The only one I saw semi-recently had a revision a few years ago but no packaging or model change to match, so you can’t verify whether it’s the older model that works or the newer one that doesn’t.
remote kill command had been issued to his device.
What the actual fuck?!
Old news that’s been posted several times in the last weeks
while this is good, we really don’t need all these smart devices in the first place
As a layman, can someone explain what the ramifications of smart devices sharing your data are? I know it’s bad, but I don’t understand why it’s bad and how it’s used against you.
The problem that is created by a person’s private data being collected against their will is primarily a philosophical one similar to the “principle of least privilege”, which you may be familiar with. The idea is that those collecting the data have no reasonable need for access to it in order to provide the services they’re providing, so their collection of that information can only be for something other than the user’s benefit, but the user gets nothing in exchange for it. The user is paying for the product/service they get, so the personal data is just a bonus freebie that the vendor is making off with. If the personal data is worthless, then there is no need to collect it, and if it does have worth, they are taking something of value without paying for it, which one might call stealing, or at least piracy. To many, this is already enough to cry foul, but we haven’t even gotten into the content and use of the collected data yet.
There is a vibrant marketplace among those in the advertising business for this personal data. There are brokers and aggregators of this data with the goal of correlating every data point they have gotten from every device and app they can find with a specific person. Even if no one individual detail or set of details presents a risk or identifies who the specific person is, they use computer algorithms to analyze all the data, narrowing it down to exactly one individual, similar to the way the game “20 questions” works to guess what object the player is thinking of–they can pick literally any object or concept in the whole world, and in 20 questions or less, the other player can often guess it. If you imagine the advertisers doing this, imagine how successful they would be at guessing who a person is if they can ask unlimited questions forever until there can be no doubt; that is exactly what the algorithm reading the collected data can do.
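The “20 questions” effect above can be shown concretely: each attribute on its own matches many people, but intersecting even a few narrows the pool to one record. The records below are made up purely for illustration.

```python
# Illustration of the "20 questions" effect in de-anonymization: each
# weak attribute alone matches many people, but intersecting a handful
# isolates a single record. All data here is invented for the example.
records = [
    {"id": 1, "zip": "30301", "dog": True,  "night_owl": False, "ev": False},
    {"id": 2, "zip": "30301", "dog": True,  "night_owl": True,  "ev": True},
    {"id": 3, "zip": "30301", "dog": False, "night_owl": True,  "ev": True},
    {"id": 4, "zip": "30302", "dog": True,  "night_owl": True,  "ev": True},
]

def narrow(candidates, **attrs):
    """Keep only the records matching every attribute observed so far."""
    return [r for r in candidates if all(r[k] == v for k, v in attrs.items())]

# One data point: still ambiguous. Three data points: one person left.
print(len(narrow(records, zip="30301")))                            # 3
print(len(narrow(records, zip="30301", dog=True, night_owl=True)))  # 1
```

A real broker has thousands of such columns per person, so the pool collapses to one individual far faster than this toy suggests.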
There was an infamous example of Target (the retailer) determining a young girl was pregnant before she told anyone or even knew herself, and creating a disastrous home situation for her by sending targeted maternity marketing materials to her house, where they were seen by her abusive family.
These companies build what many find to be disturbingly invasive dossiers on individuals, including their private health information, intimacy preferences, and private personal habits, among other things. The EFF did a write-up many years ago with creepy examples of basic metadata collection that I found helpful to my understanding of the problem here:
https://www.eff.org/deeplinks/2013/06/why-metadata-matters?rss=1
Companies have little to no obligation to treat you fairly or even do business with you, allowing them to potentially create a downright exile situation for you if they have decided you belong on some “naughty list” because of an indicator given to them by an algorithm that analyzed your info. They can also take advantage of widely known weaknesses in human psychology to influence you in ways that you don’t even realize, but which are undeniably unethical and coercive. It also creates loopholes for bad actors in government to exploit. For example, in my country (USA), the police are forbidden from investigating me if I am not suspected of a crime, but they can pay a data broker $30 for a breakdown of everything I like, everything I do, and everywhere I’ve been. If it were sound government policy to allow arbitrary investigation of anyone regardless of suspicion, then ask yourself why every non-authoritarian government forbids it.
I know that’s a lot; it is a complicated topic whose implications are hard to grasp. Unfortunately, everyone who could most effectively educate people about those risks is instead exploiting their ignorance for a wide variety of purposes. Some of those purposes are innocuous, but others are ethically dubious, and many more are just objectively nefarious. To be clear, the reason for the laws against blanket investigations was to prevent the dubious and nefarious uses, because once that data is collected, it isn’t feasible to ensure it will stay in the right hands. The determination was that the potential net good of this kind of data collection is far outweighed by the potential net negatives.
I hope that helps!
That’s an interesting article. The biggest insight is that it can get around HIPAA, for example.
A detailed room-mapping scan is basically a wealth report disguised as vacuum telemetry: square footage, room count, layout complexity, “bonus” spaces like offices or nurseries; all of it feeds straight into socioeconomic profiling. And once companies have that floor plan, they’re not just storing it; they’re monetizing it, feeding it into ad networks, data brokers, and pricing algorithms that adjust what you see (and what you pay) based on the shape of your living space.
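The floor-plan-to-profile step is mechanical enough to sketch. Everything below is invented for illustration — the room list, the area threshold, and the “segment” labels are not from any real broker’s model — but it shows how a map trivially becomes a marketing signal.

```python
# Toy illustration of how a vacuum's floor plan becomes a socioeconomic
# signal. Room names, threshold, and segment labels are all invented.
floor_plan = {  # room -> square metres, as a vacuum's map might report
    "living room": 32.0, "kitchen": 14.5, "office": 11.0,
    "bedroom 1": 16.0, "bedroom 2": 12.0, "nursery": 9.0,
}

def profile(plan: dict[str, float]) -> dict:
    """Reduce a floor plan to the kinds of attributes an ad network wants."""
    total = sum(plan.values())
    return {
        "total_m2": total,
        "rooms": len(plan),
        "has_office": any("office" in r for r in plan),   # remote worker?
        "has_nursery": any("nursery" in r for r in plan), # baby products?
        # crude, invented tiering on total floor area
        "segment": "premium" if total > 90 else "standard",
    }

print(profile(floor_plan))
```

Note that none of this needs a camera: room labels plus a lidar outline are enough.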
And a mapped floor plan also quietly exposes who lives in the home, how they move, and what can be inferred from that.
Isn’t this information already available? Like if I’m house shopping I know how many rooms the house has and the area of the house.
You know rough dimensions. You don’t have a robot going through and literally mapping every item on the floor, high-traffic areas, details about how many people live there, and possible pets, then tying it all to your IP and selling that to advertisers.
The crazy thing isn’t that they do that; it’s that you have to pay money for an item that then does it without your permission, and if you attempt to stop it, they brick the item you paid hundreds of dollars for.
I don’t know for certain that they sell your data (but they probably do), but using a wifi router and the way its signal reflects around a room, you can fully map a room with enough accuracy to tell what a person is typing on a keyboard, which is kind of terrifying if you think about it.
How is that information used?
They sell it. Some of it goes to advertisers, but recently companies like Palantir have been buying these large collections of data, de-anonymizing them, and using them to develop profiles about people, which they can then sell to the government.
And that’s what they admit to doing
Once your data is out there it’s essentially impossible to get it back
Property records won’t tell companies how many people are living at a unit, who they are, how they use the space, when they use the space, how they arrange the furniture, etc., and they don’t provide live data streams from your house.
I hadn’t really thought about how the furniture is arranged. I wonder if that’s something they sell to designers so they can then see what’s trending. Some of them don’t use cameras, but use lidar, but still getting an overall shape of things would seem useful to a designer.
You might get some snarky comments, but the way I envision it is that the fuller a picture companies can get of you (when you’re running a vacuum, when you’re driving, when your lights are on and off, etc.), the more data they have to run predictive analytics on your behavior, and that can be used in a variety of ways that may or may not benefit you. At this point it’s mostly just to get you to buy things they think you’ll buy, but what happens when your profile starts to match up with someone who commits crimes? Maybe you get harassed by the authorities a little more often? Generally, the lack of consent around how the data is collected and how it’s used is the problem most people have.
what happens when your profile starts to match up with someone who commits crimes?
I’d have dismissed this as fanciful ten years ago. But we’ve got ICE agents staking out grocery stores and flea markets looking for anyone passably “illegal”. Palantir seems to have made a trillion-dollar business model out of promising an idiot president the ability to Minority Report crime. And then you’ve got Israel’s Lavender AI and “Where’s Daddy?” programs, intended to facilitate murdering suspects by bombing the households of relatives.
I guess it wouldn’t hurt to be a little bit more paranoid.
Yeah, mine has it. I have to go into the app once a week and manually delete it.
Libre alternative?
This shit is two months old. How many times is it going to recirculate?