
> It's horrible how things like the Philips Hue bridge work and rely on insecure HTTP to control your home lighting.

The Philips Hue bridge REST API is accessible on the local network at http://192.168.1.123/api/..., which is great since apps/webapps can talk to the bridge without a cloud or Philips server in between.

And this is the very problem: it's not possible for Philips to add HTTPS support to the hue bridge without some sort of cloud roundtrip to a Philips server, while keeping the very cool feature that apps talk to the bridge purely within the local network.

Because how could that be deployed without self-signed certificates and the usual browser exceptions and warnings?



If it is an HTTP API, you don't really need a public certificate. You can have a long-term self-signed certificate on the device and check that the thumbprint hasn't changed every time you connect from your client. These big warning windows are for connecting to it from a browser, not a RestClient.
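
For what it's worth, the thumbprint check described here is straightforward outside the browser. A minimal Python sketch (the device host and the pinned value are placeholders, not anything a real vendor ships):

```python
# A sketch of fingerprint pinning for a device with a long-term
# self-signed certificate. Chain validation is disabled on purpose and
# replaced by an exact fingerprint comparison.
import hashlib
import socket
import ssl

def fingerprint(der_bytes):
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_bytes).hexdigest()

def fetch_fingerprint(host, port=443):
    """Connect and return the fingerprint of whatever cert is presented."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # verification happens via the pin
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return fingerprint(tls.getpeercert(binary_form=True))

def check_device(host, pinned):
    """Raise if the device no longer presents the pinned certificate."""
    if fetch_fingerprint(host) != pinned:
        raise RuntimeError("certificate changed: possible MITM")
```

The pinned fingerprint would be recorded once at setup time and stored by the client.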


>These big warning windows are for connecting to it from a browser, not a RestClient.

Which in my case is a webapp running in a browser :)


So any random webpage can talk to your Hue bridge?

I'm surprised they even allow such cross-domain requests, but this doesn't seem safe anyway.


Cross-domain is needed; otherwise an app/webapp couldn't talk to the bridge, since the bridge only serves the REST API.

However, in order to send control commands and query light states, the app/webapp needs to authenticate and create an account, which is only possible for a few seconds after pressing a physical button on the bridge.
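
From a client's point of view, the link-button flow looks roughly like this. A hedged Python sketch against the bridge's v1 REST API (the bridge IP and app name are placeholders; response shapes follow the documented behavior as I understand it):

```python
# Sketch of the Hue link-button authentication flow (stdlib only).
import json
import urllib.request

BRIDGE = "192.168.1.123"  # placeholder bridge address

def parse_auth_response(results):
    """Extract the API username from the bridge's JSON reply, or raise."""
    first = results[0]
    if "success" in first:
        return first["success"]["username"]
    raise RuntimeError(first["error"]["description"])

def create_user(app_name="my_app#my_device"):
    """Request an API username. Only succeeds in the short window after
    the physical link button on the bridge has been pressed."""
    body = json.dumps({"devicetype": app_name}).encode()
    req = urllib.request.Request("http://%s/api" % BRIDGE, data=body)
    with urllib.request.urlopen(req, timeout=5) as resp:
        return parse_auth_response(json.loads(resp.read()))
```

Until the button is pressed, the bridge answers with an error ("link button not pressed" in the v1 API); afterwards it hands out a username the app stores for later calls.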


Why not just proxy the REST requests through your own backend HTTPS server-app ?


We have the same issue with Glowing Bear (https://github.com/glowing-bear/glowing-bear). It's a web frontend for an IRC client (WeeChat) that connects directly to WeeChat via WebSockets. Sort of like self-hosted irccloud without a cloud. We really want everyone to use encrypted connections [0] and push people onto the TLS version of Glowing Bear. But some people host their WeeChat on their local network, and you can't (realistically) get a certificate for a local IP. So for those people we need to open an unencrypted websocket (ws://1.2.3.4), which isn't possible from an https site. Ideally we'd like to disallow unencrypted connections to non-local destinations but that's practically impossible to determine in JS. It's a super annoying problem.
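
Outside the browser, "is this destination local?" is a one-liner; the frustration is that in-page JS can neither resolve hostnames itself nor reliably classify the target address. A Python sketch of the check we would like to make:

```python
# The check a secure webapp would like to perform before allowing an
# unencrypted connection: is this destination on the local network?
import ipaddress

def is_local_destination(host):
    """True if host is a private, loopback, or link-local IP literal.
    For hostnames we'd need to do DNS resolution ourselves, which a
    browser page cannot do -- hence the problem described above."""
    try:
        ip = ipaddress.ip_address(host)
    except ValueError:
        return False  # a hostname, not an IP literal: can't tell
    return ip.is_private or ip.is_loopback or ip.is_link_local
```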

Disallowing unsafe websockets from secure origins is one of those policies that is a really good idea 99.5% of the time but for those last 0.5% of use cases, it's a major pain in the bum.

[0] WeeChat has a /exec command to execute arbitrary commands, and the client has access to that --- not great when you transmit your password in plain text.


> it's not possible for Philips to add HTTPS support to the hue bridge without some sort of cloud roundtrip to a Philips server

I don't see why not. The alternative is freedom. Philips doesn't have to lock their devices. That's a choice they made, sadly the choice that most companies make.

> Because how could that be deployed without self-signed certificates and the usual browser exceptions and warnings?

The fact that your browser warns you about insecure communication happening from that web page is a good thing. Even if you deliberately choose to accept that and believe that there's no other way for this particular service/device.

The simple fact that you accept some insecure traffic doesn't make it secure.


> > it's not possible for Philips to add HTTPS support to the hue bridge

> I don't see why not.

That's not a constructive argument. I don't see how they could make it work?

Even if they somehow solve the problem of giving these devices domain names, and even if they generate a separate private key for each unit, the key and cert are going to be embedded in the firmware and a sufficiently sophisticated attacker will just extract them and become able to impersonate some Philips device.

How is the user of another device going to tell whether he is connecting to his own device or to a malicious neighbor impersonating it, who establishes Philips-signed HTTPS with the victim, opens a second connection to the victim's actual device, and MITMs the traffic?

You would have to make all users install a trusted certificate authority tied to their individual device. Which is a UX disaster in current browsers and also a security disaster, because if this becomes a norm, sooner or later somebody will sell you a toy device bundled with a CA crafted to give him the ability to impersonate any website. And you'll trust this CA because you want to play with the toy.

This maybe could be made to work with some improvements in browser UI. Make it easier to add new roots of trust. Make it easier to learn and/or limit what websites these certs will be authorized to authenticate. But nothing like that exists now.

> The fact that your browser warns you about insecure communication happening from that web page, that's a good thing. [...] The simple fact that you accept some insecure traffic, doesn't make it secure.

True. As somebody pointed out elsewhere in this thread, this warning will become another EU cookie banner nothingburger.


> > > it's not possible for Philips to add HTTPS support to the hue bridge

> > I don't see why not.

> That's not a constructive argument. I don't see how they could make it work?

Missing from your quote: The alternative is freedom. Philips doesn't have to lock their devices.

If Philips (and other companies, obviously this doesn't relate to just Philips) would provide the community access to their devices and software rather than locking them out, I believe that this problem would not exist.

The original issue is that a public website (used over TLS) which interacts with local network devices without TLS shows warnings about insecure communication. Again, the warning is shown because it /is/ insecure. There are plenty of alternatives for securely interacting with an IoT device; plain HTTP from a public website is just not one of them. For example, look at how Apple's HomeKit has implemented this. HomeKit is not usable from a public web page in a web browser. That's a good thing. (Aside: I'm not a big fan of HomeKit, but their security is not bad.)

So if vendors are annoyed with browser warnings, it's because /they/ are doing the wrong thing, not the browsers.

> sufficiently sophisticated attacker will just extract them and become able to impersonate some Philips device

Just like on any website. Just because something isn't 100% unbreakable, doesn't mean it's a bad idea (you do lock your doors, don't you?)


> Missing from your quote: The alternative is freedom. Philips doesn't have to lock their devices.

> If Philips (and other companies, obviously this doesn't relate to just Philips) would provide the community access to their devices and software rather than locking them out, I believe that this problem would not exist.

The problem is technical and won't be fixed by just opening the software.

> There are plenty of alternatives for securely interacting with an IoT device.

Please name just one which works for webapps, besides HTTPS.

> So if vendors are annoyed with browser warnings, it's because /they/ are doing the wrong thing, not the browsers.

Homekit is nice but not available to webapps, apps of course can take advantage of several security mechanisms.

My whole rant is about browsers and HTTPS in non-public networks.

For webapps which want to talk to IoT devices there is only HTTPS, and there is _no_ sane way to provide robust, local(!) access to a LAN device via HTTPS.

Here are some requirements (they're real; I'm working on an IoT'ish product):

* webapp must be served via HTTPS, either from the IoT device or vendor site

* it just works! If the webapp is served from the IoT device, the user shall not be required to install a certificate or set an exception (because it then looks like scary-as-hell malware)

* the webapp must work offline (service worker or appcache) without internet connection

* the webapp must be able to talk directly to the device, no cloud or vendor server in between

* the IoT device which provides a secured REST API might be in a LAN which is NOT connected to the internet -- so the '<random-stuff-id>.vendor.com DNS resolves to device IP with a Let's Encrypt CA' approach won't work here (otherwise a nice hack)

To my knowledge it's technically not possible today to build such an HTTPS-secured webapp in a local network without breaking the mentioned requirements.


I think this is part of the larger problem of the relentless "cloud first" movement that the whole industry seems to have adopted. I feel like there isn't any new software, device, or standard in development by now that doesn't demand constant internet access and a dedicated background service. Even basic things that should have no business relying on internet access get swallowed by that. (Browsers, operating systems, cars...)

The economic incentives that push everyone in that direction are obvious but I think in the end that will lead to more harm than good.

That being said, I can understand that making IoT devices directly accessible from web page JS could cause some security headaches:

As an example, apparently a lot of recent exploits were caused by programs opening a loopback-only REST service for IPC. Those services weren't secured because, hey, if someone can talk to loopback, the system is compromised anyway. The developers didn't realize that any webpage open in a browser can do that via script (respecting CORS) and so even loopback services should be considered exposed to the internet.
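
A minimal sketch of that vulnerable pattern (a hypothetical service, Python stdlib only):

```python
# A loopback-only IPC service of the kind described above. It assumes
# "only local processes can reach 127.0.0.1" and therefore skips
# authentication entirely.
from http.server import BaseHTTPRequestHandler, HTTPServer

class IPCHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        self.rfile.read(length)  # consume the request body
        # Danger: a cross-origin POST with a text/plain body is a CORS
        # "simple request", so the browser sends it WITHOUT a preflight.
        # The page cannot read this response, but the side effect has
        # already happened by the time CORS blocks anything.
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"action performed\n")

    def log_message(self, *args):
        pass  # silence request logging for the demo

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8765), IPCHandler).serve_forever()
```

Any webpage open in the user's browser can fire that POST at 127.0.0.1, which is exactly why "loopback equals trusted" fails.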

I can imagine that an IoT device offering a browser-accessible REST interface might cause similar non-obvious attack vectors. So at the least, it would have to implement some kind of user management and authentication - which might be challenging for small devices.

I think what we really need is some kind of dedicated standard for browsers talking to things on the LAN. Such a standard could then handle discovery, certificate management and authentication/permissions in one go - and would enable browsers to present a good UI for those steps.

However, right now everyone seems too busy developing intricate Rube Goldberg machines[1] to care, and the agenda of the browser vendors seems to go in the opposite direction -- so I don't have high hopes.

I think practically, the most feasible step right now is to forego browsers and build an app instead. Then you have to deal with the headaches of app development but at least you get a nice user experience without any backend services...

[1] https://twitter.com/isotopp/status/877444175708475393


> The problem is technical and won't be fixed by just opening the software.

I strongly disagree. The main reason is that if Philips had opened their firmware to the public, it would have had different protocols by now than just HTTP with a poor man's JSON API.

> Homekit is nice but not available to webapps, apps of course can take advantage of several security mechanisms.

That's why basically my point is to /not/ use a web app to control local insecure IOT devices.

> Please name just one which works for webapps, besides HTTPS.

Use locally resolvable DNS names and wildcard certificates signed by commonly trusted (public) CAs. It's been done before (Plex does something like this IIRC).

* update: I just noticed another comment [1] that mentions Plex with a link to some technical details [2].

[1] https://news.ycombinator.com/item?id=14751768

[2] https://blog.filippo.io/how-plex-is-doing-https-for-all-its-...


> it would have had different protocols by now than just HTTP with a poor man's JSON API.

Sure, but now you can't control the gadget from the browser and the vendor needs to write an application or something for whatever shitty OS you want to use.

> Use locally resolvable DNS names and wildcard certificates signed by commonly trusted (public) CAs. It's been done before (Plex does something like this IIRC).

Not that simple. Public CAs will likely only give you certs for domains you own (like plex.direct), and your users generally don't have nameservers authoritative for such domains on their LANs (maybe you could pull it off if you were a router vendor, but not with IoT light bulbs), so they have to query your public nameserver and the system fails without an Internet connection.

And there is no easy solution: if your light bulb could register an xxx.philips.com domain via UPnP on your router or via SMB on your Windows box, it would be very much unclear what exactly should prevent it from registering philips.com as well.


> Just like on any website. Just because something isn't 100% unbreakable, doesn't mean it's a bad idea (you do lock your doors, don't you?)

Don't you think it's a completely different thing to extract keys from a remote server (try https://news.ycombinator.com/ for example) and a physical gadget you own?

Doubly so if the gadget is open source, as you apparently prefer.


Not really; for a hacker both are remote servers, aren't they? I agree that in practice many security updates are not provided for IoT devices (another reason for FOSS), so it might get easier, and at the same time less relevant, to extract the keys.

A gadget being open source doesn't mean the private keys are. Most internet servers are running open source software (BSD, Linux).

In my opinion the manufacturer should /not/ have your gadget's private key. But that's not really related to this problem.


I was talking about a different scenario: I buy the same kind of light bulb you own, extract its private key and use it to either:

1. impersonate your light bulb, because they both have the same key

2. impersonate my light bulb, because you and your browser can't tell the difference

To prevent such attacks, each device needs its own certificate and key and then furthermore you need one of the following:

1. each certificate is signed by a unique CA which you add to your browser's list of trusted CAs so that it doesn't trust other devices' certs because they are signed by different CAs

2. each device has a globally unique domain and you type this domain into the browser

3. maybe some other equally cumbersome solution


> 1. impersonate your light bulb, because they both have the same key

Definitely don't give both the same key.

> 2. impersonate my light bulb, because you and your browser can't tell the difference

Who cares if you can impersonate your own lightbulb?

> 1. each certificate is signed by a unique CA

This doesn't change either scenario. If they shared a key then custom CAs don't stop impersonation. If each device has its own key and CA then they still can't impersonate your device, and they still can impersonate their device.

> 2. each device has a globally unique domain and you type this domain into the browser

Typing in "jzhf.hue.com" sounds easier than figuring out what IP has been assigned to the device.


> Who cares if you can impersonate your own lightbulb?

For MITM - you think you are connecting to your device, actually it's my proxy (DNS spoof, ARP spoof, TCP hijack, ...), you still get the green bar in your browser saying "über secure Philips lightbulb", you just don't know it's mine because the domain matches and it's signed by the same CA (assuming neither of these protections is in place).

> If each device has its own key and CA then they still can't impersonate your device, and they still can impersonate their device.

Without manual installation of my CA your browser won't accept the certificate ripped from my device.

You said in another post that providing correct address is better than per-device CA. No doubt it's more convenient in a commercial product, assuming you can solve the DNS problem somehow (which doesn't seem possible without working Internet connection or editing hosts file). From pure security standpoint though, I feel like per-device CA has an added advantage of resistance to typosquatting. But it's getting academic now, it's hard to squat if it takes buying a physical device with the right ID.


  I don't see how they could make it work?
Plex achieves this with a very convoluted setup [1] - they run a DNS server so that 1-2-3-4.625d406a00ac415b978ddb368c0d1289.plex.direct resolves to IP address 1.2.3.4, then they issue each user a wildcard certificate for *.625d406a00ac415b978ddb368c0d1289.plex.direct
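
The hostname encoding itself is trivial; the hard parts are running the DNS service and getting the CA deal. A sketch of the encoding, using the hash from the example above:

```python
# Sketch of the plex.direct naming trick: the device's LAN IP is
# encoded into a hostname under the user's personal wildcard-certified
# subdomain, and the vendor's DNS server decodes it back.
def plex_style_hostname(ip, user_hash):
    """Dots in the IP become dashes in the leftmost DNS label."""
    return "%s.%s.plex.direct" % (ip.replace(".", "-"), user_hash)

def decode_ip(hostname):
    """What the vendor's DNS server does with the first label."""
    return hostname.split(".", 1)[0].replace("-", ".")
```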

Of course, you have to get a special deal from a CA at who-knows-what-cost - likely meaning open source projects need not apply. And you get a dependency on cloud infrastructure, if they stop issuing certs you end up in a bad place. And you get a giant, ugly URL. And you have to make a DNS lookup so traffic leaves your network anyway.

It's an ugly solution with a lot of downsides - but I doubt the CA/Browser Forum plans to give people much choice in the matter, so it's their way or the highway :-|

[1] https://blog.filippo.io/how-plex-is-doing-https-for-all-its-...


I don't see why you couldn't do that with Let's Encrypt, especially since they just announced they'll start giving out free wildcard certs.


Wildcard certs are not a solution to this problem. Sharing a private cert with all customers isn't what the solution does; every customer gets their own cert.

Second, Let's Encrypt has low limits of 20 certs per week. So imagine VLC added a Plex-like streaming feature; they'd need far, far more than 20 certs a week given how large their user base is.


> Wildcard certs are not a solution to this problem. Sharing a private cert with all customers isn't what the solution does; every customer gets their own cert.

That's not what I mean. I mean the same solution as described by michaelt above, that is, provide a different wildcard cert per user.

> Second, Let's Encrypt has low limits of 20 certs per week. So imagine VLC added a Plex-like streaming feature; they'd need far, far more than 20 certs a week given how large their user base is.

Remember that the limit is only on the number of new users; Let's Encrypt has a renewal exemption that lets you renew your certs even after hitting the 20/week limit. So while it might still not be enough for VLC, I don't think it's a problem for most projects. Plus you can always use more than one domain.


> I don't think it's a problem for most projects

Pretty much any open source project that was to need certs similar to Plex would pass this limit the moment it was mentioned on HN. Why should an open source project have to register hundreds of domains just to handle this case? Someone else gave a long list of the devices and services running in his house that need certs like Plex: effectively every router, NAS, IP camera, and other networked device that exposes a web interface, and therefore every open source project that builds those -- OpenWRT for example, FreeNAS, ZoneMinder, etc...


BTW, who really is Let's Encrypt, why should I trust them, and why should I trust that they won't disappear once plain HTTP is no longer supported by cargo-cult-security-conscious browsers?

It seems to me like providing certificates isn't exactly free, in itself.


Say they disappear, so what? You're left in the exact same situation as before they appeared, except with some money saved in the meantime.


You must have missed

> once plain HTTP is no longer supported by cargo-cult-security-conscious browsers

There are already people talking about such a possibility, and some even appear to believe it would be a good idea.

Of course what happens then is that without Let's Encrypt you are stuck paying other CAs to have anything published on the Web at all.

<tinfoil hat on>LE is a conspiracy of CAs to phase out unencrypted HTTP and ensure them infinite money stream.

<tinfoil hat off>Even if it isn't, LE will disappear five months after their mission is done because what the heck, why bother.

I just wonder if there is any reason to believe that users of LE are any smarter than kids accepting free candy from pedos? Maybe there are reasons but I just haven't heard them yet.


Ah, I think I'm missing an assumption you're making: that LE is indispensable (or almost) for browsers to deprecate HTTP.

Personally, I think the deprecation (as in, the warning bells and reduced priority, not full blocking) was going to happen anyway, and LE was mostly inconsequential, even if it makes the transition easier.

As for LE being a CA conspiracy, I don't think that makes much sense considering their funders (e.g. Mozilla, Google) and those funders' relationships with existing CAs (see WoSign, Symantec). But anything's possible.


This is better than HTTP because complexity breeds security, right?


HTTPS, in and of itself, is extremely complex. So I might advise against that argument.

And the Plex system sounds quite awkward, but not particularly complex.


> That's not a constructive argument. I don't see how they could make it work?

Give each one a subdomain that resolves to its local IP, and give it a valid certificate for that subdomain.

> extract them and become able to impersonate some Philips device.

Or the attacker could just have a real, non-impersonated Philips device. If the user deliberately points their browser at the wrong device's site, nothing can save them. This is a very different problem from securing access to the correct site.

> You would have to make all users install a trusted certificate authority tied to their individual device.

That's not true, and I don't even understand what benefit that would have.

If you have a way to deliver a CA, instead you should deliver the correct address of the device. This makes 'MitM' impossible without any downsides.


I would suggest that browsers should support some kind of TOFU for self-signed certificates used by non-publicly accessible web servers.

What if they'd just ask the user to accept and install a certificate when connecting to a local server for the first time?
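
The bookkeeping behind such a TOFU policy is small. A Python sketch of what the browser would effectively have to store and check (the store format and function names are assumptions for the example):

```python
# Trust-on-first-use for local devices: pin a certificate's fingerprint
# the first time a host is seen, refuse silently-changed certs later.
import hashlib
import json
import os

def tofu_check(host, der_cert, store_path):
    """Return True if the cert is trusted under trust-on-first-use."""
    fp = hashlib.sha256(der_cert).hexdigest()
    pins = {}
    if os.path.exists(store_path):
        with open(store_path) as f:
            pins = json.load(f)
    known = pins.get(host)
    if known is None:
        pins[host] = fp  # first contact: pin this certificate
        with open(store_path, "w") as f:
            json.dump(pins, f)
        return True
    return known == fp   # later contacts: must present the same cert
```

This is the same model SSH uses for host keys; the open UX question is what the browser should show when the check fails.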


>I don't see why not.

Well then, please explain how this is possible.



