FBI Can't Win: Apple Engineers Will Quit Before Unlocking iPhone

chimeracreator

New member
Lightknight said:
Let's say that they created a back door for new phones and such from the start. Something like bit locker that a key word could unlock. If left there, anyone who knew the word would get in. Now, let's say that the key word was tied to an algorithm that itself was under encryption with no back door.

For the sake of clarity this amounts to:

1. Generate the encryption key.
2. Encrypt the encryption key using a public key.


Lightknight said:
If left there, if the encryption ever got out or was successfully reverse engineered (not sure if non-back door encryption on it would prevent that) then anyone having it would be in. Next, let's include safety measures like it unlocking requiring second and third factor authentications that could include encrypted communication with a machine that has to be in physical range.
This part isn't possible. Cryptography doesn't respect physical demarcations. While the code you wrote to unlock the key could enforce a physical-proximity check, anyone who got their hands on that code could easily remove those sections or fake them.

This leaves you with the much simpler and well-examined approach of using a government-held public key to encrypt and store a copy of your encryption key. In general this is how encrypted backup solutions work, and the approach has some merit. However, from a technical standpoint it opens up two additional attacks (sketched in the code below):

1. Compromising the government's private key compromises all devices. This can be mitigated by limiting the number of devices encrypted with each public-private key pair, but as private companies aren't supposed to hold these keys, the issue cannot be eliminated.

2. Phones would still need to generate their own symmetric encryption keys, which would then have to be encrypted and transmitted. This process is error-prone, and a skilled attacker could prevent the encryption key from ever being received by a government system.
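To make that escrow approach concrete, here is a minimal sketch in Python using the pyca/cryptography package (the library and parameter choices are illustrative assumptions, not how any real vendor does it). The device wraps a copy of its own symmetric key under the escrow holder's public key, and only the matching private key can recover it:

```python
# Minimal key-escrow sketch: a device encrypts a copy of its own
# symmetric key under an escrow holder's RSA public key.
# Assumes the pyca/cryptography package: pip install cryptography
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Escrow key pair. In practice the private half would live offline;
# it is generated here only so the sketch is self-contained.
escrow_private = rsa.generate_private_key(public_exponent=65537, key_size=3072)
escrow_public = escrow_private.public_key()

device_key = os.urandom(32)  # the phone's 256-bit symmetric key

# Attack 1 lives here: whoever holds escrow_private can unwrap this
# for every device escrowed under the same public key.
wrapped_copy = escrow_public.encrypt(device_key, OAEP)

# Attack 2 lives here: wrapped_copy still has to be transmitted to,
# and stored by, the escrow system, and that delivery can be blocked.
recovered = escrow_private.decrypt(wrapped_copy, OAEP)
assert recovered == device_key
```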
 

Lightknight

Mugwamp Supreme
chimeracreator said:
Lightknight said:
Let's say that they created a back door for new phones and such from the start. Something like bit locker that a key word could unlock. If left there, anyone who knew the word would get in. Now, let's say that the key word was tied to an algorithm that itself was under encryption with no back door.

For the sake of clarity this amounts to:

1. Generate the encryption key.
2. Encrypt the encryption key using a public key.
I'm sort of talking more about what World of Warcraft does. You don't have a single key so much as the two sides have a pre-arranged agreement to generate a certain key when given certain parameters at a certain time. So a key that works for five minutes may never work again.

So I'm not talking about the key word being "applerules" and suddenly everyone can get into phones.
Lightknight said:
If left there, if the encryption ever got out or was successfully reverse engineered (not sure if non-back door encryption on it would prevent that) then anyone having it would be in. Next, let's include safety measures like it unlocking requiring second and third factor authentications that could include encrypted communication with a machine that has to be in physical range.
This part isn't possible. Cryptography doesn't respect physical demarcations. While the code you wrote to unlock the key could respect it if anyone got their hands on it they could easily remove these sections or fake them.
1. What if the key generator remains on an encrypted partition that itself has no back door? BitLocker doesn't have to encrypt an entire drive, and to be honest, with multi-terabyte drives that could take a hell of a long time. But it can encrypt folders and partitions without encrypting the rest of the drive. The 2nd and 3rd factor authenticators would also be stored on devices with no back door in their encryption either.

2. Why would they be able to remove the two- and three-factor authentications if those are required to gain access to the phone in the first place? Hypothetically, even if you already knew the key, you'd only have one of three or more keys. Hell, Apple could make them undergo a hundred-factor authentication process and really gum up people's ability to rip them off. It could even be crazy, where the employee has to apply the machine factor keys in a specific order.

3. I'm not talking about these things being given to the government. I'm talking about Apple and Microsoft and whomever else having a department like this and being able to bill the government for their services of unlocking these phones. I think it's totally bogus for the government to force anyone to do a job. The government can't enact temporary slavery just because they don't know how to get into a phone. Hypothetically, this would also produce a side business for these companies if someone gets locked out of their own phone and can prove identification.

I'd love it if they code-named their encryption Ring at this point and their decryption department Mordor...

4. I know you say they don't respect physical boundaries. But all I'm saying is that in addition to email-based two-factor authentication, they could also set up entire machines that are fully encrypted (again, no back doors present on these key machines) that themselves have to shake hands with the phones. Machines that are never exposed to the internet and generally only shake hands with these phones. That could be markedly more difficult to spoof, in that you'd literally have to be one of the handful of people who maintain these machines to have any kind of serious access to them.

The future probably has a bunch of apps that encrypt your data instead. That would probably be impossible to police. But for the major carriers it's all about casting the widest net.


5. The initial symmetric key is really just to start the communication. The idea is to assume that this will eventually get out, but to make it hard enough that it should be years before it is reverse engineered. Hopefully years after the next phone has already replaced it and made decrypting it pointless. If you really think about it, our current market only needs a delay of about 3 years minimum. If they can get to 5 or more years, then for all intents and purposes it will succeed.

6. Is there any secure way to update security on phones? My guess is that, since updates are transmitted, it might be impossible. But if secure connections can be made, or perhaps if people go into physical storefronts to update, then it's not unreasonable that every couple of years they could change the algorithm and factor keys for the security-conscious.
 

chimeracreator

New member
Lightknight said:
chimeracreator said:
Lightknight said:
Let's say that they created a back door for new phones and such from the start. Something like bit locker that a key word could unlock. If left there, anyone who knew the word would get in. Now, let's say that the key word was tied to an algorithm that itself was under encryption with no back door.

For the sake of clarity this amounts to:

1. Generate the encryption key.
2. Encrypt the encryption key using a public key.
I'm sort of talking more about what World of Warcraft does. You don't have a single key so much as the two sides have a pre-arranged agreement to generate a certain key if given certain parameters at a certain time. So while one key works for five minutes, it may never work again.
So I'm not talking about the key word being "applerules" and suddenly everyone can get into phones.
Lightknight said:
If left there, if the encryption ever got out or was successfully reverse engineered (not sure if non-back door encryption on it would prevent that) then anyone having it would be in. Next, let's include safety measures like it unlocking requiring second and third factor authentications that could include encrypted communication with a machine that has to be in physical range.
These systems use encryption, but they aren't actually encryption systems themselves. They are authentication systems that use a shared symmetric key, generated when the device is initially synced. Both devices then generate a hash from their shared key plus a time epoch, which lets them authenticate to each other.

Unfortunately none of this helps to actually securely encrypt the device's encryption key. For that you need to use public key cryptography.
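For reference, that style of token boils down to something like this sketch (essentially TOTP per RFC 4226/6238, standard library only; the seed value is made up). Both sides hold the same seed and hash it together with the current time window:

```python
# TOTP-style sketch: both sides share a seed set at sync time, then
# derive a short-lived code from the seed plus the current time epoch.
import hashlib
import hmac
import struct
import time

def totp_code(shared_seed: bytes, step_seconds: int = 30, digits: int = 6) -> str:
    """Derive a short code from the shared seed and the time window."""
    epoch = int(time.time()) // step_seconds  # same value on both sides
    mac = hmac.new(shared_seed, struct.pack(">Q", epoch), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # RFC 4226 dynamic truncation
    value = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)

seed = b"seed-shared-when-the-token-was-synced"  # hypothetical seed
# The server recomputes the same function and compares:
assert totp_code(seed) == totp_code(seed)
```

Note that the code only proves possession of the seed; nothing here encrypts, or helps decrypt, anything.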

Lightknight said:
1. What if the key generator remains on an encrypted partition that itself has no back door? Bit Locker doesn't have to encrypt an entire drive and to be honest at multi-terabyte drives that could take a hell of a long time to do. But it can encrypt folders and partitions without encrypting the rest of the drive. The 2nd and 3rd factor authenticators would also be stored on devices with no back door in their encryption either.
The security of the key generator isn't the primary problem with this scheme; per Kerckhoffs's principle, a system should stay secure even when everything about it except the key is public. Also, to run the generator you would still need to decrypt it, and if the generator was in constant use it would need to stay decrypted. As such it could be harvested from memory if the host was compromised, a technique that's already widely used to steal credit card numbers.

Lightknight said:
2. Why would they be able to remove the two and three factor authentications if those are required to gain access to the phone in the first place? Hypothetically, even if you already knew the key you'd only have one of three or more keys. Hell, Apple could make them have to undergo a hundred factor authentication process and really gum up people's ability to rip them off. It could even be crazy where the employee has to apply the machine factor keys in a specific order.
I'm not exactly sure what you're saying here. Authorization controls normal access to a system. Direct physical access lets you bypass authorization layers, which leaves only encryption. If you're talking about adding authorization layers to the server system which hosts it, that can help, but it won't stop them from being a target for hackers.

Lightknight said:
3. I'm not talking about these things being given to the government. I'm talking about Apple and Microsoft and whomever else having a department like this and being able to bill the government for their services of unlocking these phones. I think it's totally bogus for the government to force anyone to do a job. The government can't enact temporary slavery just because they don't know how to get into a phone. Hypothetically, this would also produce a side business for these companies if someone gets locked out of their own phone and can prove identification.
These companies honestly have a hard enough time securing themselves against everyone who wants to hack them now. They don't want to add an even more tempting target on their backs.

Lightknight said:
4. I know you say they don't respect physical boundaries. But all I'm saying is that in addition to email based two factor authentication, they could also set up entire machines that are fully encrypted (again, no back doors present on these key machines) that themselves have to shake hands with the phones. Machines that are never exposed to the internet and generally only shake hands with these phones. That could present a markedly more difficult challenge to spoof in that you'd literally have to be one of a handful of people who maintain these machines that could have any kind of serious access to them.
Again, this has to do with the authentication phase used to get the master key, not with encryption.

Lightknight said:
5. The initial symmetric key is really just to start the communication. The idea is to assume that this will eventually get out but to make it hard enough that it should be years before that is reverse engineered. Hopefully years after the next phone has already replaced it and made un-encrypting it pointless. If you really think about it, our current market only needs about 3 years minimum to delay things. If they can get to 5 or more years then for all intents and purposes it will succeed.
Nope, the first symmetric key is for the phone itself. It is the only key which can actually decrypt data on the phone. Everything else is just protection for it, including the fact that it is encrypted under a much weaker user-provided passkey.

Lightknight said:
6. Is there any secure way to update security on phones? My guess is since it is transmitted that it might be impossible. But if there are secure connections that can be made or perhaps if they have people go into physical store fronts to update then it's not unreasonable that every couple of years they could change the algorithm and factor keys for the security conscious.
In general, yes, it is possible to update the security on phones. However, the most secure systems available use purpose-built chips for this, and as such they cannot be upgraded; on the other hand, they aren't vulnerable to a host of attacks that the iPhone can be hit with.
 

Lightknight

Mugwamp Supreme
Let's keep in mind that my main point is that we need to start viewing the idea of having a back door and keeping it safe as a problem to try to resolve, not just sticking with the claim that it would instantly be broken.

We really just need to plan for ways to delay hackers by 5+ years for it to be a total success.

chimeracreator said:
Lightknight said:
1. What if the key generator remains on an encrypted partition that itself has no back door? Bit Locker doesn't have to encrypt an entire drive and to be honest at multi-terabyte drives that could take a hell of a long time to do. But it can encrypt folders and partitions without encrypting the rest of the drive. The 2nd and 3rd factor authenticators would also be stored on devices with no back door in their encryption either.
The security of the key generator isn't the primary problem with this scheme. This is part of Kerckhoffs's principle. Also to run the generator you would still need to decrypt it, and if the generator was in constant use it would need to stay decrypted. As such it could be harvested from memory if the host was compromised, which is a technique that's widely used to steal credit card numbers already.
Kerckhoffs's principle. Fantastic, I'm speaking to the right sort of person.

While it could hypothetically be harvested from memory, please understand that something like a BitLockered device can't really be harvested the same way a non-encrypted device is. I keep using BitLocker because it's the form of encryption I'm most familiar with. The only case study I've seen where a BitLockered device was cracked was a scenario where the device was already set up in a lab with the encrypted drive unlocked, so they could rapidly hard-freeze the memory to pull the key out of it. This would be unrealistic in the real world and would only help you with one device rather than all of them. Even then, the multi-factor keys wouldn't be in memory, because the user wouldn't be accessing the phone via that method; they'd be using their standard password or retinal scan or whatever else. Let's not forget that even on a phone encrypted without a back door, the hard-freeze technique would still reveal the user password in memory. But you still basically have to have the person enter the information and then freeze it right then. Not very useful.

Also, let me pose a question to you on the WoW two-factor authentication process. Why hasn't that been cracked? Surely there would have been a lot of money in it, just like there was in hacking people's accounts.

Lightknight said:
2. Why would they be able to remove the two and three factor authentications if those are required to gain access to the phone in the first place? Hypothetically, even if you already knew the key you'd only have one of three or more keys. Hell, Apple could make them have to undergo a hundred factor authentication process and really gum up people's ability to rip them off. It could even be crazy where the employee has to apply the machine factor keys in a specific order.
I'm not exactly sure what you're saying here. Authorization controls normal access to a system. Direct physical access allows you to bypass authorization layers, which leaves only encryption. If you're talking about adding authorization layers onto the server system which hosts it that can help, but that won't stop them from being a target for hackers.
I'm not talking about the regular encryption a user uses to get into the phone. I'm talking about gaining access to the back door to that regular encryption. In order to activate the back door you'd have to first enter the key, which would hypothetically start the phone in an alternate limited mode that would then require the multi-factor authentications to go further. There are all kinds of potential tricks they could add to it. Small things that would make a hacker's time a lot more difficult if they didn't work in this department for Apple and if Apple's employees didn't spill the beans. Say the limited mode also gives you access to alter the time/date, and no one tells you that you have to set the phone to a particular date before entering certain keys. I mean, they can make the whole process really convoluted. But those tricks would only be good as long as the employees maintained silence on the matter. And again, per Kerckhoffs's principle, the employees remaining silent could not be assumed.

Lightknight said:
3. I'm not talking about these things being given to the government. I'm talking about Apple and Microsoft and whomever else having a department like this and being able to bill the government for their services of unlocking these phones. I think it's totally bogus for the government to force anyone to do a job. The government can't enact temporary slavery just because they don't know how to get into a phone. Hypothetically, this would also produce a side business for these companies if someone gets locked out of their own phone and can prove identification.
These companies honestly have a hard enough time securing themselves against everyone who wants to hack them now. They don't want to add an even more tempting target on their backs.
These companies already have the source code for everything they make in their networks right now. If they get hacked and lose vital stuff like that, then there's no helping them. This would make them no less secure and no more of a target for hackers. It just means that if they are successfully hacked, what the hackers get would be more directly usable than source code they'd have to reverse engineer later.

But again, this department should not be network- or internet-facing. It should be its own workgroup, or maybe the machines serving as the multi-factor authenticators shouldn't even be connected to one another directly. So to hack them you'd have to do so on site, one at a time (or over multiple connections at once). If I were to set something like this up, my two primary responsibilities would be to ensure that the machines are only accessible within the rooms where phones would be opened and nowhere else, and to monitor my employees vigilantly.

Make no mistake though, what I'm proposing Apple do would be a business service that they'd be paid to do. So even if they are incurring additional risk, it's no different than any other venture they might start up.

Lightknight said:
4. I know you say they don't respect physical boundaries. But all I'm saying is that in addition to email based two factor authentication, they could also set up entire machines that are fully encrypted (again, no back doors present on these key machines) that themselves have to shake hands with the phones. Machines that are never exposed to the internet and generally only shake hands with these phones. That could present a markedly more difficult challenge to spoof in that you'd literally have to be one of a handful of people who maintain these machines that could have any kind of serious access to them.
Again, this has to do with the authentication phase to get the master key not encryption.
Let's talk hypothetically for a second. Let's say that all BitLockered hard drives had two keys: whatever the user sets up, and a key word that Bill Gates knows.

If Bill Gates himself sat at a desk and entered said password himself whenever the FBI brought him a hard drive, how would that system be broken, hypothetically? The assumption being that he won't spill the beans.

With that in mind, imagine 3 or more machines that won't spill the beans because they can never communicate with the outside world, that also know a key, and that only communicate it to a phone that is directly hooked into them and has already been given the necessary preceding keys and the algorithm login.

Lightknight said:
5. The initial symmetric key is really just to start the communication. The idea is to assume that this will eventually get out but to make it hard enough that it should be years before that is reverse engineered. Hopefully years after the next phone has already replaced it and made un-encrypting it pointless. If you really think about it, our current market only needs about 3 years minimum to delay things. If they can get to 5 or more years then for all intents and purposes it will succeed.
Nope, the first symmetric key is for the phone itself. This is the only key which can actually decrypt data on the phone. Everything else is just a protection for it, including the fact that it is encrypted based on a much weaker user provided passkey.
I'm talking about the back door key I'm proposing in my hypothetical. It wouldn't be used to access the phone the same way the current user password does. It would be used to start the back door sequence. Once activated the phone then starts listening for the designated machines and their multi-factor keys.

The back door should be a different way into the phone. Not just a second way to enter the same path. I would assume similar principles of bricking the phone would apply if that back door was accessed incorrectly too.

Lightknight said:
6. Is there any secure way to update security on phones? My guess is since it is transmitted that it might be impossible. But if there are secure connections that can be made or perhaps if they have people go into physical store fronts to update then it's not unreasonable that every couple of years they could change the algorithm and factor keys for the security conscious.
In general yes, it is possible to update the security on phones. However the most secure systems available use purpose built chips for this and as such they cannot be upgraded, but also aren't vulnerable to a host of attacks that the iPhone can be hit with.
I was just wondering if it would be possible to change up the algorithm and expected multi-factor keys from time to time. If you have a system that cannot reasonably be decrypted for, let's say, a year, it would be crafty to push out a new algorithm every 11 months to reset the time-to-crack clock.
 

chimeracreator

New member
Lightknight said:
Let's keep in mind that my main point is that we need to start viewing the idea of having a back door and keeping it safe as a problem to try to resolve. Not just sticking with the claim that it would instantly be broken.

We really just need to plan for ways to delay hackers by 5+ years for it to be a total success.

Also, let me pose a question to you on the WoW two factor authentication process. Why hasn't that been cracked? Surely there would have been a lot of money in it just like there was to hack peoples' accounts.
In the case of Blizzard, this is likely because they maintain their own token servers; if you could own the token server, you could just as easily have owned the users' accounts more directly. This has happened to another company, however: in 2011 RSA was hacked, and the seeds for their SecurID tokens were leaked. As a direct result, Lockheed Martin suffered a major cyber attack, as they were using these tokens as a second factor for remote authentication.

Lightknight said:
I'm not talking about the regular encryption to get into the phone that a user uses. I'm talking about gaining access to the backdoor to that regular encryption. In order to activate the backdoor you'd have to first enter the key which would hypothetically start the phone in an alternate limited mode which would then require the multi-factor authentications to go further. There are all kinds of potential tricks they could add to it. Small things that would make a hacker's time a lot more difficult if they didn't work in this department for Apple and if Apple's employees didn't spill the beans. Like let's say the limited mode also gives you access to alter the time/date and no one tells you that you have to set the phone to a particular date before entering certain keys or something. I mean, they can make the whole process really convoluted. But those would only be good as long as your employees maintained silence on the matter. But again, as with Kerckhoff's principle the employees remaining silent would not be assumed.

These companies have the source code for everything they make already in their networks right now. If they get hacked and lose vital stuff like that then there's no helping them from hackers. This would make them no less secure or no more a target for hackers. It just means that if they are successfully hacked, that what hackers get would be easier than getting a source code to reverse engineer later.

But again, this department should be non-network or internet facing. It should be its own work group or maybe the machines serving as the multi-factor authentication shouldn't even be connected to one another directly either. So to hack them you'd have to do so on site and one at a time (or multiple connections at once). If I were to set something like this up my two primary responsibilities would be to ensure that the machines are only accessible within the rooms where phones would be opened and no where else and also to monitor my employees vigilantly.

Make no mistake though, what I'm proposing Apple do would be a business service that they'd be paid to do. So even if they are incurring additional risk, it's no different than any other venture they might start up.
I think it might be best to take a step back and quickly go over how encryption on a device like a phone or hard drive works.

1. An encryption key K1 is generated to encrypt and decrypt the device.

2. A given block of data D is encrypted and then this encrypted data E is written to the physical device. So F(K1, D) = E

3. This is repeated for all of the blocks on the device using the same key, usually with some sort of counter mode or similar mechanism to prevent statistical attacks that could identify a block's contents.

4. A user then generates a second encryption key K2 based on a password and some information in the device.

5. K1 is then encrypted using K2 and then stored on the device as EK1. F(K2, K1) = EK1.

6. Accessing data then follows the pattern: F(K2, EK1) = K1, which is then used to get data from the phone. In all cases, once K1 has been recovered from EK1, the decrypted K1 is kept in memory for performance reasons.

The following will always be true for a device:

1. EK1 must be accessible in base storage.
2. K1 must be generated before EK1.
3. K1 is the only key which can decrypt the data on a device. Having another key decrypt the data directly would require storing a second encrypted copy, a linear increase in the size of the data.
4. K1 can be encrypted using multiple keys K3, K4, etc which will allow for multiple users to access its contents without a significant change in the storage size.

As such, when we discuss a government key, we are discussing creating another key K3 which can recover K1. We also know that when the copy of K1 encrypted under K3 is generated, it must be sent to a computer that collects it. So the device must be online, and the computer it is connecting to must be online. This precludes the secure database of escrowed keys from being kept offline, which is a massive risk.
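Here is a rough sketch of steps 1-6 in Python (pyca/cryptography; the AES-GCM and PBKDF2 choices are my illustrative stand-ins, and a real phone does this in dedicated hardware):

```python
# Sketch of the K1 / K2 / EK1 scheme described above.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

# Step 1: K1, the only key that actually encrypts the device.
k1 = AESGCM.generate_key(bit_length=256)

# Step 2 (and, repeated per block, step 3): F(K1, D) = E.
nonce = os.urandom(12)
e = AESGCM(k1).encrypt(nonce, b"a block of user data D", None)

# Step 4: K2 is derived from the (much weaker) user password plus
# some per-device information, here a random salt.
salt = os.urandom(16)
k2 = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                salt=salt, iterations=600_000).derive(b"user passcode")

# Step 5: F(K2, K1) = EK1, which is what actually sits in storage.
wrap_nonce = os.urandom(12)
ek1 = AESGCM(k2).encrypt(wrap_nonce, k1, None)

# Step 6: F(K2, EK1) = K1, after which K1 is held in memory and used
# to decrypt the data.
k1_recovered = AESGCM(k2).decrypt(wrap_nonce, ek1, None)
assert AESGCM(k1_recovered).decrypt(nonce, e, None) == b"a block of user data D"
```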


Lightknight said:
Let's talk hypothetically for a second. Let's say that all bit locked hard drives had two keys. Whatever the user sets up and a key word that Bill Gates knows.

If Bill Gates himself sat at a desk and entered said password himself whenever the FBI brought him a hard drive, how would that system be broken, hypothetically? The assumption being that he won't spill the beans.

With that in mind, imagine 3 or more machines which won't spill the beans because they can never communicate with the outside world that also know a key and only communicate it with a phone that is directly hooked into them and has already been given the necessary preceding keys and the algorithm log in?
This isn't possible with symmetric key encryption, since any device which used Gates's key to encrypt K1 could also use that key to decrypt K1. As his key would have to be on all devices for this to work, the scheme could be cracked by anyone who owned a phone.

This is somewhat possible with public key cryptography as encryption and decryption keys are different. As such you could supply PubK3 to everyone and restrict PrivK3 to an offline system. That said, if anyone compromised PrivK3 they would own every system at once.


Lightknight said:
I'm talking about the back door key I'm proposing in my hypothetical. It wouldn't be used to access the phone the same way the current user password does. It would be used to start the back door sequence. Once activated the phone then starts listening for the designated machines and their multi-factor keys.

The back door should be a different way into the phone. Not just a second way to enter the same path. I would assume similar principles of bricking the phone would apply if that back door was accessed incorrectly too.
Lightknight said:
I was just wondering if it would be possible to change up the algorithm and expected multi-factor keys from time to time. If you have a system that cannot be reasonably decrypted for, let's say a year, it would be crafty to push out a new algorithm every 11 months to reset the time to crack clock.
Encryption keys don't work in terms of multiple factors; only authentication systems do. You can use a key-splitting algorithm to split the private key, but at some point the pieces always have to come back together. Also, in all honesty, based on how bureaucracies work this will end up being an internet-connected system made by a single vendor, resulting in a single point of failure when it gets breached.
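For illustration, here is a minimal n-of-n key split (plain XOR sharing; a real deployment would more likely use Shamir's scheme so that only a threshold of shares is needed):

```python
# n-of-n key splitting via XOR: all shares are required, and any
# subset short of all of them is statistically random noise.
import os
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key: bytes, n: int) -> list[bytes]:
    """Split key into n shares, all of which are needed to rebuild it."""
    shares = [os.urandom(len(key)) for _ in range(n - 1)]
    shares.append(reduce(xor_bytes, shares, key))
    return shares

def combine_shares(shares: list[bytes]) -> bytes:
    # The full key only ever exists at the moment the shares meet here.
    return reduce(xor_bytes, shares)

private_key = os.urandom(32)        # stand-in for PrivK3
shares = split_key(private_key, 4)  # e.g., one share per offline machine
assert combine_shares(shares) == private_key
```

Note how combine_shares has to reassemble the whole key in one place; that moment of reassembly is exactly the single point of failure described above.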
 

Lightknight

Mugwamp Supreme
Thank you for taking all this time to engage with me in this line of thinking. I really appreciate it.

chimeracreator said:
Lightknight said:
Let's keep in mind that my main point is that we need to start viewing the idea of having a back door and keeping it safe as a problem to try to resolve. Not just sticking with the claim that it would instantly be broken.

We really just need to plan for ways to delay hackers by 5+ years for it to be a total success.

Also, let me pose a question to you on the WoW two factor authentication process. Why hasn't that been cracked? Surely there would have been a lot of money in it just like there was to hack peoples' accounts.
In the case of Blizzard this is because they likely maintain their own token servers. So if you can own the token server you could have just as easily have owned the user's accounts more directly. This has happened to another company however in 2011 RSA was hacked and this caused the seeds for their secure tokens to be leaked. As a direct result of this Lockheed Martin suffered a major cyber attack as they were using these tokens as a second factor for remote authentication.
Ok, so how is this different from what I'm proposing, except that Apple's token servers wouldn't even be public-facing and would therefore be even more secure, vulnerable only to the actual employees and oversight in that area? Why can Blizzard succeed at a public-facing token server for a decade, but Apple somehow couldn't succeed with an internal, private workgroup token server?

I personally think we just believe in being able to have privacy, and so our responses are tainted by that bias. In many ways I consider our phones to be an augmentation of ourselves, and forcing us to relinquish them is not unlike forcing us to forfeit the 5th amendment. But even with that belief, I'm not going to pretend this isn't possible in an enclosed environment. Hell, I don't even believe that, had they coded a back door in this circumstance, the code would ever have seen the light of day. Apple would have performed all the security measures I already brought up, from making that environment entirely offline to watching what the employees developing this are doing. I think Apple is just defending what they see as the right to privacy and not being honest about just how secure they can be about this, in the same way we haven't seen their equally vulnerable source code popping out into the world. That source code already exists, and it would also provide the tools necessary for other teams to create a back door.

Lightknight said:
I'm not talking about the regular encryption to get into the phone that a user uses. I'm talking about gaining access to the backdoor to that regular encryption. In order to activate the backdoor you'd have to first enter the key which would hypothetically start the phone in an alternate limited mode which would then require the multi-factor authentications to go further. There are all kinds of potential tricks they could add to it. Small things that would make a hacker's time a lot more difficult if they didn't work in this department for Apple and if Apple's employees didn't spill the beans. Like let's say the limited mode also gives you access to alter the time/date and no one tells you that you have to set the phone to a particular date before entering certain keys or something. I mean, they can make the whole process really convoluted. But those would only be good as long as your employees maintained silence on the matter. But again, as with Kerckhoff's principle the employees remaining silent would not be assumed.

These companies have the source code for everything they make already in their networks right now. If they get hacked and lose vital stuff like that then there's no helping them from hackers. This would make them no less secure or no more a target for hackers. It just means that if they are successfully hacked, that what hackers get would be easier than getting a source code to reverse engineer later.

But again, this department should be non-network or internet facing. It should be its own work group or maybe the machines serving as the multi-factor authentication shouldn't even be connected to one another directly either. So to hack them you'd have to do so on site and one at a time (or multiple connections at once). If I were to set something like this up my two primary responsibilities would be to ensure that the machines are only accessible within the rooms where phones would be opened and no where else and also to monitor my employees vigilantly.

Make no mistake though, what I'm proposing Apple do would be a business service that they'd be paid to do. So even if they are incurring additional risk, it's no different than any other venture they might start up.
I think it might be best to take a step back and quickly go over how encryption on a device like a phone or hard drive works.

1. An encryption key K1 is generated to encrypt and decrypt the device.

2. A given block of data D is encrypted and then this encrypted data E is written to the physical device. So F(K1, D) = E

3. This is repeated for all of the blocks on the device using the same key usually with some sort of counter function or similar mechanism to prevent statistical attacks that could identify a block's content.

4. A user then generates a second encryption key K2 based on a password and some information in the device.

5. K1 is then encrypted using K2 and then stored on the device as EK1. F(K2, K1) = EK1.

6. Accessing data then follows the pattern: F(K2, EK1) = K1 which is then used to get data from the phone. In all cases the decrypted version of K1 is stored in memory for performance reasons once EK1 has been extracted.

The following will always be true for a device:

1. EK1 must be accessible in base storage.
2. K1 must be generated before EK1.
3. K1 is the only key which can decrypt the data on a device. Having another key do this would result in a linear increase in the size of the data.
4. K1 can be encrypted using multiple keys K3, K4, etc which will allow for multiple users to access its contents without a significant change in the storage size.

As such when we discuss a government key we are discussing creating another key K3 to access K1. We also know that when K3 is generated it must be sent to a computer to collect it. As such the device must be online and the computer it is connecting to must be online. This precludes the secure database of encryption keys from being offline which is a massive risk.
Not really. The goal of "K3" is to toggle password protection off so that access to K1 is no longer tied to K2. The back door would be making K2 "togglable... toggable... toggleable?" if and only if K3 is entered.

And no, the special conditions of K3 might require a wired connection to a particular machine. They don't both have to be online if the intention is that this ONLY be performed by Apple, in their designated work space, while connected to their offline computers. Since each of the machines represents part of K3, it is imperative that they not be exposed to the general internet or even the greater network. Instead, they should at most be a small separate workgroup, or a distinct domain with no forest trusts to the other domains.


Lightknight said:
Let's talk hypothetically for a second. Let's say that all bit locked hard drives had two keys. Whatever the user sets up and a key word that Bill Gates knows.

If Bill Gates himself sat at a desk and entered said password himself whenever the FBI brought him a hard drive, how would that system be broken, hypothetically? The assumption being that he won't spill the beans.

With that in mind, imagine 3 or more machines which won't spill the beans because they can never communicate with the outside world that also know a key and only communicate it with a phone that is directly hooked into them and has already been given the necessary preceding keys and the algorithm log in?
This isn't possible with symmetric key encryption since any device which used Gate's key to encrypt K1 could also use the key to decrypt K1. As his key would be on all devices for this to work the scheme could be cracked by anyone who owned a phone.

This is somewhat possible with public key cryptography as encryption and decryption keys are different. As such you could supply PubK3 to everyone and restrict PrivK3 to an offline system. That said, if anyone compromised PrivK3 they would own every system at once.
Ok, so then this is possible; the weak point is PrivK3. Which is why I'm proposing making PrivK3 entirely offline and closely controlled, as well as making the process as convoluted as possible to ensure no individual has access to all of the machines without direct oversight. So imagine something like four desks with a solitary dock that connects the iPhone automatically to all of the keys. No interface except on the phone itself.

For someone to steal "K3" they'd have to do a number of overtly suspicious and frankly alarming things. Let's also assume that entering the final piece of K3 includes clearing the memory as part of the process.


Lightknight said:
I'm talking about the back door key I'm proposing in my hypothetical. It wouldn't be used to access the phone the same way the current user password does. It would be used to start the back door sequence. Once activated the phone then starts listening for the designated machines and their multi-factor keys.

The back door should be a different way into the phone. Not just a second way to enter the same path. I would assume similar principles of bricking the phone would apply if that back door was accessed incorrectly too.
Lightknight said:
I was just wondering if it would be possible to change up the algorithm and expected multi-factor keys from time to time. If you have a system that cannot be reasonably decrypted for, let's say a year, it would be crafty to push out a new algorithm every 11 months to reset the time to crack clock.
Encryption keys don't work in terms of multiple factors only authentication systems do. You can use a key splitting algorithm to split the private key, but at some point this will need to always come together. Also, in all honestly based on how bureaucracies work this will end up being through an internet connected system made by a single vendor resulting in a single point of failure when it gets breached.
Ok, sure, if they expose it to the internet then they 'dun fuk'd up'. I'm not talking about how they could set it up insecurely; I'm talking about the possibility of creating a K3 that would be nearly impossible to crack, and certainly not within five years.

Sounds like you acknowledge a private, non-public-facing token server would successfully do this. Especially in light of WoW having done this for a decade with public-facing token servers, which is even more insane.

I have one small question I haven't asked yet. Could Apple send an update to the phone that disables security as well (essentially turning off the password requirement)? I assume it couldn't be that simple, or hackers would have just created pseudo-update spoofs to the same effect.
 
Jan 27, 2011
rednose1 said:
It's about setting a precedent. As soon as the courts say the F.B.I. is in the right, they'll use that example for all the other cases (think they have 12 right now) to force compliance. They're not trying to win this one fight, but all the others down the road. The F.B.I. is using this case in particular to argue because it sounds so good in the media. (Apple won't help us fight terrorists!!)
This is probably what's going on. They likely don't care too much about this phone in particular (they already have a lot of other leads), but they hope to use this case to compel tech companies to crack their own encryption wide open.

Considering the NSA debacle, I do not trust the fucking FBI with that kind of power.
 

chimeracreator

New member
Lightknight said:
Thank you for taking all this time to engage with me in this line of thinking. I really appreciate it.
Not a problem.

Lightknight said:
Ok, so how is this different from what I'm proposing except Apple's token servers won't even be public facing and therefore even more secure as being only vulnerable to the actual employees and oversight in that area? Why can blizzard succeed at a public facing token server for a decade and Apple somehow couldn't succeed at an internal private workgroup token server?
Sorry, I might have been a bit off-topic with the RSA bit. A token server cannot function offline, nor can it be used for encryption. Tokens are only usable for authentication.

Lightknight said:
Not really, the goal of "K3" to toggle password protection off so that access to K1 is no longer combined with K2. The back door would be making K2 "togglable... toggable... toggleable?" If and only if K3 is entered.
This is not possible while maintaining a mathematically sound encryption algorithm. What you've described is a flawed algorithm, and introducing it would invalidate the encryption straight away. What you need (and how these schemes really work) is to generate EK1, EK1', EK1'', and so on. Then the owner of K3 can decrypt EK1' using F(K3, EK1') = K1. This is the only mathematically sound way to handle multi-user access to an encrypted device without letting each user know the others' keys.
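A minimal sketch of that sound version (symmetric keys stand in here for the user's password-derived key and for what would really be the escrow holder's public key; the library is again my assumption):

```python
# Sound multi-party access: one K1, wrapped independently under each
# party's key as EK1, EK1', ... No party learns another's key, and the
# cipher itself is never weakened by a "toggle".
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def wrap(kek: bytes, key: bytes) -> tuple[bytes, bytes]:
    nonce = os.urandom(12)
    return nonce, AESGCM(kek).encrypt(nonce, key, None)

k1 = AESGCM.generate_key(bit_length=256)  # the device key
k2 = AESGCM.generate_key(bit_length=256)  # user's key (password-derived in reality)
k3 = AESGCM.generate_key(bit_length=256)  # escrow holder's key

ek1 = wrap(k2, k1)        # EK1: the user's copy
ek1_prime = wrap(k3, k1)  # EK1': the escrow copy (a few dozen bytes,
                          # not a second copy of the whole disk)

# Either party recovers the same K1 from its own copy alone:
assert AESGCM(k2).decrypt(*ek1, None) == k1
assert AESGCM(k3).decrypt(*ek1_prime, None) == k1
```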


Lightknight said:
Ok, so then this is possible, the weak point is PrivK3. Which is why I'm proposing making PrivK3 entirely offline and closely controlled as well as being as convoluted as possible to ensure no individual has access to all of the machines without direct oversight. So imagine something like four desks with a solitary dock that plugs connects the iPhone automatically to all of the keys. No interface except on the phone itself.

For someone to steal "K3" they've have to do a number of overtly suspicious and frankly alarming things. Let's also assume that part of the process of the final piece of K3 being entered includes clearing the memory.
Yes, using public key cryptography it is possible to securely store your key offline. This has been done before for extended periods of time, but in the end these keys are always valuable targets.

Also, we're at a strange point right now for modern public-key algorithms, as all of them are known to be weak against quantum computers. Odds are this won't be an issue for around twenty or so years, but it's worth remembering.

Lightknight said:
Ok, sure, if they expose it to the internet then they 'dun fuk'd up'. I'm not talking about how they could set it up unsecurely, I'm talking about the possibility of creating a K3 that would be nearly impossible to crack and certainly not possible within five years.

Sounds like you acknowledge a private non-public facing token server would successfully do this. Especially in light of WoW having done this for a decade with private public facing token servers which is even more insane.
A token server can't do this because, as I said before, tokens are for authentication, not encryption. However, a public key could be used to encrypt data relatively securely.

Lightknight said:
I have one small question I haven't asked yet. Could Apple send an update to the phone that disables security as well (essentially turning off the password requirement)? I assume it couldn't be that simple or hackers would have just created pseudo update spoofs to the same effect.
Apple can currently send an update to a phone which alters core OS functionality. This cannot be done by hackers who do not have access to Apple's private code-signing key. Since all Apple devices check the digital signature Apple creates using this key, a third-party OS could not run.
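Roughly, the check every device performs looks like this sketch (Ed25519 as a stand-in; Apple's real signing chain is more involved and not public in detail):

```python
# Signed-update verification sketch. Assumes pyca/cryptography.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Vendor side: the private key never leaves the vendor.
signing_key = Ed25519PrivateKey.generate()
verify_key = signing_key.public_key()  # baked into every device

update = b"os update payload"
signature = signing_key.sign(update)

# Device side: run nothing whose signature doesn't verify.
verify_key.verify(signature, update)  # genuine update: passes
try:
    verify_key.verify(signature, b"tampered payload")
except InvalidSignature:
    print("rejected: not signed with the vendor's private key")
```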

However, as I said before, all of this may break in the face of quantum computing. Ignoring that, this is secure so long as Apple keeps the private key secure. Just like encrypting K1 with PubK3 would be secure as long as an organization kept PrivK3 secure.

The trouble is that any agency who compromised PrivK3 or Apple's code-signing key might not tip their hand until it would be really nasty. So we can't know how well Apple has secured this. All we can know is that no public breach using it has occurred.