My talk will be on research that was conducted in an academic setting but is motivated very much by industry and by the issues facing the world of computing in the next generation.
I started by looking at directions in wireless and personal communication in order to analyze what is coming. I saw a situation where mobile computing is reaching everywhere, spanning both the physical world and the information world, and therefore new and important applications are coming. As users, we will all have mobile computers in addition to the computers in the office or at home. Embedded wireless devices are going to be everywhere, turning objects into active objects; we saw this in the talk this morning on the vision from the Korean government. Mobile phones and RFID devices are going to be embedded in objects, so in some sense there will be sensors everywhere. These devices, computers, and physical objects are going to have better self-awareness; for example, they may have GPS units, so they will be aware of their location.
So we see two trends: the tracking of objects in real time, and the integration of the physical and electronic worlds. This implies changes, actually huge changes, in supply chains, where real-time information will revolutionize logistics. Location-oriented e-marketing is another example, as is location-oriented multimedia content, and so on.
So, looking at the situation in which information objects and physical objects are all connected wirelessly and aware of their environment, I call it the ActiveWorld, because everything is going to be actively communicating at all times. I asked myself what the implications for security are, and I concluded that these are going to be disruptive changes in the way things are, and that they imply new challenges and problems but also new opportunities for security, privacy, and cryptographic technologies. I want to comment that security technologies are both disablers of attacks, since security mechanisms disable attacks, and enablers, serving as a positive infrastructure for new applications.
So, in the rest of this talk, I will cover a few points. I will start with general design issues and new ideas for this new technology. Then I will talk about the good and the bad aspects of this ActiveWorld from the security point of view. Then I will discuss some basic problems and basic design issues in this ActiveWorld and demonstrate new systems and new designs motivated by these issues; this part may contain a bit of technical material, which I will point out. Then I will conclude.
We need to think about these new issues, and I want to remind you that, first of all, it takes time for abstract ideas to make it into products and systems, but new ideas are followed up and eventually embedded in new technologies. Maybe the best example in the area of cryptography is public-key cryptography. At the time, it was suggested as a purely theoretical idea: the computing resources required for a secure system were thought to be very high. But as computing power increased, implementation became much easier, and public-key systems are now a reality and in actual use. I want to remind you that, from an industry point of view, if an idea materializes too late, it is a problem; there may not be a return on investment. But public key became mature just in time to be ready for the Internet. That was a good example.
So, I will give you some more examples of how abstract ideas go into practice, starting from personal experience. One is the use of public keys on one side only: of the client and the server, only the server side has a public key, and the two parties can still perform key exchange and secure the channel between them. This was almost a side comment in a paper that I presented at Crypto '85. But in the mid-90s, SSL adopted exactly this model, where only servers have public keys, and indeed in that paper we claimed that putting the keys only at the server side is what makes the deployment of a public-key infrastructure scalable. People at Netscape adopted this idea in SSL in the mid-90s.
The second example: in 1990, I worked on chosen-ciphertext-secure public-key encryption. It was presented as a theoretical notion, in which the decryption device is challenged with ciphertexts as part of the attack, purely in order to understand such notions of security within theoretical cryptography. But this notion became truly important when, at Crypto '98, Bleichenbacher showed that the SSL server implementations available on the Internet, because of the way encryption was used in them, were actually exposed to this chosen-ciphertext attack.
More examples. "Information flow models" for the security of messaging were motivated by military applications in the 70s and 80s, but they inspired the notions of firewalls and virtual private networks. Another example is key pre-distribution for multi-user systems. I worked on it in the 80s, and these schemes have recently been suggested for use in sensor networks, which are also part of the ActiveWorld, because the keys can be computed with simple mathematics. In sensor networks, the network nodes are small, so instead of other cryptographic schemes, these key pre-distribution systems are suggested. I want to tell you that sensors and RFID tokens are the current "performance limited" environments for cryptography.
So, I said before that public key, when suggested, was considered so theoretical that you could not implement it in a general computing environment. Now you can, in hardware and definitely in software too. But there is a rule that at any time something is performance limited; nowadays it is RFID tokens and sensors, while PCs are fine.
I talked about how ideas can come from the abstract, theoretical side, and in security research it has been demonstrated many times how they can influence the practice of actual systems. Since we are looking at new systems and new designs when we talk about this ActiveWorld, I want to make one remark about idea integration. You can take an idea and integrate it into systems, and this is easier when it is a separate component, like a firewall, which has a very definite place in your network, or an add-on component, like SSL. It is much harder to integrate ideas when you have to influence the entire system; treating security as an afterthought, integrating it only after you have designed the system, is very hard. For example, combining full-scale PKI with an application like secure email is very hard, so the idea is to integrate security efforts at design time. Now we have new wireless networks; we are talking about a new world. If we integrate security at its core, in its infrastructure, it will be much easier, because many of the solutions we now need for the Internet are needed precisely because the Internet was originally designed without security in mind. So, this was about ideas and their integration. Now let us turn to this ubiquity, this new world, which I call the ActiveWorld.
The next point I want to make is that new technology is a double-edged sword, because new technology creates new problems. For example, the Internet is exposed to more attacks; with the web, we have more attacks, and so on. This is obvious. But new technology also creates opportunities. For example, with network connectivity we can have more affordable network backup services.
Here are some examples of exposures in this ActiveWorld. Devices may be lost: I carry my device with me everywhere, and it is much easier to lose than the mainframe of the 80s. Tracking it invades privacy. Communication, if open, is revealing, and even the pattern of communication may be revealing. The integration of the physical and electronic worlds makes confinement and the insulation of subsystems much harder.
But there are also advantages in the ActiveWorld. Every user now has a device, and the device has certain characteristics; this is an advantage. We may have various channels of communication to an object, that is, redundant channels: a user can use the Internet from his PC, and he also has a mobile phone, so there are two channels to the user. And we can design certain subsystems from scratch, because it is a new world. We can take care of privacy with the correct balance between personal privacy and reducing the possible abuse of anonymity.
These exposures and advantages of the new technologies are "research opportunities". I am looking at this from the standpoint of research in academia and industry: the advantages are opportunities because we can exploit them, and the exposures are opportunities because we need to design against them. I will demonstrate this point.
For example, I have a physical device, say a mobile phone with a few RFID tokens on it. This means I have a strong security token. Here is one way to use it: users nowadays rely on passwords to log into their PCs, and these passwords are usually quite weak, with known attacks against them. With these handheld devices, users can access their PCs in a different way. They first unlock their mobile phone, say with a password, or with biometrics, just by holding something that can read their fingerprint or another trait; from then on, they can activate the device to replace the password. The device can log into the PC over local communication using a cryptographic function that is much stronger than a password, because it has more entropy, and with the device we can get automatic "single sign-on" to other applications.
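A minimal sketch of such device-based login, assuming a shared high-entropy key enrolled between the phone and the PC; all class and function names here are illustrative, not from the talk, and HMAC challenge-response stands in for whatever cryptographic function a real deployment would use:

```python
import hashlib
import hmac
import secrets

class Device:
    """The unlocked phone: holds a high-entropy key the user never types."""
    def __init__(self, key: bytes):
        self._key = key  # 256-bit secret, far beyond a password's entropy

    def respond(self, challenge: bytes) -> bytes:
        return hmac.new(self._key, challenge, hashlib.sha256).digest()

class PC:
    """The PC: stores the key enrolled for this device."""
    def __init__(self, enrolled_key: bytes):
        self._key = enrolled_key

    def login(self, device: Device) -> bool:
        challenge = secrets.token_bytes(32)  # fresh nonce per attempt
        response = device.respond(challenge)
        expected = hmac.new(self._key, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(response, expected)

key = secrets.token_bytes(32)   # enrolled once, e.g. when pairing phone and PC
device, pc = Device(key), PC(key)
assert pc.login(device)         # login succeeds without any typed password
```

The fresh random challenge means an eavesdropper on the local link cannot replay an old response, which is exactly the advantage over a static password.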
Another characteristic of these devices is that they are human-held: I hold my phone. This means there is a human behind the device, and we can take advantage of this. Many attacks rely on automated, programmed agents, like botnets; a distributed denial-of-service attack is an example. But if we make sure, via the handheld device, that a program is activated by a human, we have a way to stop such attacks. We can ask the user, over an independent channel, to authenticate as a human, and then these automated attacks can be stopped. This can also be used for digital rights management, to assure better remote control over content distribution.
But as I said, you cannot get this for free. Since the handheld device is now used for security, we need strongly secure management of these critical devices: if a device is used for security, it becomes part of the critical infrastructure. We need online monitoring and location control over these devices, and all these issues require novel security procedures beyond the current practices of mobile device management. Those new device-management procedures may, in turn, also violate privacy.
In my examples I mentioned two things: first, the exposure of mobile devices to being lost; second, the fact that we may lose privacy. I will illustrate these issues with two public key infrastructure (PKI)-related research projects that I have undertaken in the last few years. One is the notion of key-insulated cryptography, and the other is what we call traceable signatures.
Let me remind you what PKI is: each user has a public key that is used for verifying signatures, and the user signs messages with a secret key that corresponds to this public key.
First, key-insulated cryptography. The motivation is that a cryptosystem relies on the possession of a secret key to perform its tasks, and the question is what happens if this key is lost. Mobile devices are easy to steal, so somebody may steal the device and read our key, and this is one of the most serious threats in real life, because it is usually easier to steal a key than to break the underlying cryptography. To break a signature key, you may need to factor a very large composite number, whereas to steal somebody's mobile phone you do not need such a great deal of mathematics.
So, can we do anything if the key is lost? The first answer is probably not, because all the security relies on the key, and now it is lost. But in fact we can compartmentalize; we can limit the damage. For example, we can limit the damage of a stolen mobile phone holding a signing key by using a helper, such as the home computer, to assist with the security.
Usually, a public key is assumed to be valid for one period of time, say three or five years. In key-insulated security, we refine this large time unit into small periods, say every month or every week, at some granularity of time: N periods, with a single public key PK for all N periods. In a usual public-key system, this PK would have one secret key SK, good for the entire five years. Here, instead, we have an initial secret key SK0, and at time period i there is a new secret key SKi: an update function takes SKi-1 and i and computes the new secret key SKi.
The public key is the same for all periods, but the operations are aware of the period i, so the public key essentially does not change; the idea is to keep the same public key for five years, as before. The public operation, signature verification in our case, is always done with PK and the period i, while the secret operation, signing in our example, is done with the corresponding SKi.
What is the goal? Suppose there are t time periods i1, i2, up to it in which the secret keys SKi1, SKi2, up to SKit are exposed; somebody compromised your mobile phone during those five or six weeks, while the key kept being updated. What we want is that every other period remains secure. For example, you go on a trip with the mobile phone you use to sign documents, and the key gets exposed. The exposure is limited to that week only, because the following week there is an update, and the signatures of the succeeding weeks cannot be generated from the key that was exposed a week earlier. You need to update from period to period, and for this we introduced a helper key that assists in the update function.
In my example, if the secret key is held on your mobile phone, the helper may be your PC at home or in the office, which is in a much more secure environment. You go to the office and perform the update, and this helper key SK* is used only for helping; it is not used for signing. Only the mobile phone signs.
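In the notation of the talk, the key-insulated interface just described can be restated compactly (this is only a summary of the description above, not additional machinery):

```latex
\begin{align*}
\text{KeyGen:}\quad & (PK,\; SK_0,\; SK^{*}) \\
\text{Update:}\quad & SK_i \leftarrow \mathrm{Upd}(SK_{i-1},\; SK^{*},\; i)
  \quad \text{(the helper key } SK^{*} \text{ assists)} \\
\text{Sign:}\quad & \sigma \leftarrow \mathrm{Sig}_{SK_i}(m,\; i) \\
\text{Verify:}\quad & \mathrm{Ver}_{PK}(m,\; i,\; \sigma) \in \{0,1\}
\end{align*}
```

The security goal: even if the keys of periods $i_1, \ldots, i_t$ are exposed, signatures of every other period remain unforgeable.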
So, this is a good example of how you can use mobile devices together with stationary devices, like the PC at home, to get more security. You can use this idea to protect mobile keys, as I explained, but you can also use it for limited-time delegation or escrow, in which one week you give the signing key to one device and the next week to another.
Another application of key-insulated cryptography is proxy signatures. An example of a proxy signature: a manager goes on vacation for one week, and for that week he wants his secretary to sign for him. He does not have to hand over the entire key; he gives the secretary a capability to sign just for that week and not beyond it. With key-insulated cryptography, you can do proxy signatures.
Now, I will show you a simple design of a key-insulated signature scheme built from any regular signature scheme, such that no matter how many periods are exposed, the remaining periods stay secure. The price we pay is that each signature in this scheme consists of two or three signatures of the underlying scheme.
The public key will be two verification keys: the verification key of the user, VKU, and the verification key of the helper, VKH. There is SK0, the signing key of the user, which corresponds to the user's verification key, and there is the helper's key SK*, also written SKH, which is the signing key corresponding to the helper's verification key VKH.
We start with two public verification keys and their two secret keys: the user's secret key at the signing device, my mobile phone, and SKH at my PC, which is my helper. For each time period i, the user gets a new secret key, which I call ski, together with a signature by the helper's key on (vki, i), where vki is the verification key corresponding to ski; you can view this signature, SigH(vki, i), as the certificate for the period-i verification key. The update works as follows: my helper, the home computer, generates the new key pair, signs the verification key vki and the period i with its key, and sends ski, vki, and this certificate to the mobile phone. So at each update you get a genuinely new key pair of the signature scheme, plus a certification of its verification key and the period under the helper key, which is part of the public key; ski and vki are the keys for this period.
What is a signature on a message m at period i? First, I sign with the period key: Sigski(m). I also include SigU(m, i), a signature under the user key U, which always remains on the user's mobile phone. And I attach the certificate SigH(vki, i), which tells the verifier: here is the key vki with which to verify the period signature. The user signature is verified with VKU, and the certificate with VKH. Verification consists of verifying these three signatures.
What do we have here? The helper key VKH acts like a certification-authority key: in each period there is a new key that is used for signing, and SigH(vki, i) is the certificate of that signing key. This is very similar to a public key infrastructure.
In addition, we have the signature under the user key U, which always stays on the mobile device and is never held by the helper, the home computer; this prevents the helper from being able to sign, because we do not want the home computer to sign. We thus carefully manage which device we trust for what. This is an example of how to use more than one device to increase security against the greater exposure of mobile devices.
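A minimal, runnable sketch of this generic construction, assuming nothing beyond the description above: a one-time, hash-based Lamport signature stands in for the underlying signature scheme (a real system would use a many-time scheme such as ECDSA, since Lamport keys must not sign twice), and all function names are illustrative.

```python
import hashlib
import secrets

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

# --- one-time Lamport signatures: the stand-in underlying scheme ---------
def lamport_keygen():
    sk = [[secrets.token_bytes(32), secrets.token_bytes(32)] for _ in range(256)]
    vk = [[H(pair[0]), H(pair[1])] for pair in sk]
    return sk, vk

def msg_bits(msg: bytes):
    d = H(msg)
    return [(d[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def lamport_sign(sk, msg):
    return [sk[i][b] for i, b in enumerate(msg_bits(msg))]

def lamport_verify(vk, msg, sig):
    return all(H(sig[i]) == vk[i][b] for i, b in enumerate(msg_bits(msg)))

def ser(vk):  # deterministic encoding of a verification key
    return b"".join(h for pair in vk for h in pair)

# --- the key-insulated layer from the talk -------------------------------
def keygen():
    sku, vku = lamport_keygen()   # user key U: stays on the phone
    skh, vkh = lamport_keygen()   # helper key SKH: stays on the home PC
    return (vku, vkh), sku, skh   # public key is the pair (VKU, VKH)

def helper_update(skh, period: int):
    """Helper issues the phone a fresh key ski plus a certificate on (vki, i)."""
    ski, vki = lamport_keygen()
    cert = lamport_sign(skh, ser(vki) + period.to_bytes(4, "big"))
    return ski, vki, cert

def sign(sku, ski, vki, cert, msg: bytes, period: int):
    i = period.to_bytes(4, "big")
    return (lamport_sign(ski, msg + i),   # period key: only this week is at risk
            lamport_sign(sku, msg + i),   # user key: keeps the helper from forging
            vki, cert)

def verify(pk, msg: bytes, period: int, sig) -> bool:
    vku, vkh = pk
    s_period, s_user, vki, cert = sig
    i = period.to_bytes(4, "big")
    return (lamport_verify(vki, msg + i, s_period)
            and lamport_verify(vku, msg + i, s_user)
            and lamport_verify(vkh, ser(vki) + i, cert))

pk, sku, skh = keygen()
ski, vki, cert = helper_update(skh, period=7)        # weekly update at the PC
sig = sign(sku, ski, vki, cert, b"transfer $100", 7)
assert verify(pk, b"transfer $100", 7, sig)
```

Note how the three verifications mirror the talk: the period signature against vki, the user signature against VKU, and the certificate against VKH; exposing ski of one period gives an attacker nothing for other periods, and the helper alone cannot sign because it never holds the user key.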
Now I move on to the privacy issue. In this ActiveWorld, privacy is indeed lost at many levels, and the question is: do we need to lose it?
Privacy means that we want to hide or conceal properties of the entities that are acting, rather than concealing messages. On the other hand, if we allow private, anonymous transactions, we have to pay a price.
Perfect anonymity can be dangerous. A privacy primitive should offer mechanisms that allow conditional revocation of anonymity: users should usually be anonymous, but if they misbehave, their anonymity can be revoked. We need a trade-off between privacy and identification. Now, PKI itself always reveals who the user is, but we want a public-key infrastructure in which the identity of the user need not be revealed.
Consider the following setting. We have a large system with many anonymous users and many remote verification points. Users issue signatures that get collected and verified at the remote points, and the users should remain anonymous. We want to support three scenarios.
Scenario 1: In this distributed system, with its numerous verification points, users come and sign transactions. The authority receives a tracing request: open a given signature to reveal the signer, because perhaps the transaction did something illegal under cover of anonymity, and we want to reveal the signer. This is the first tracing scenario.
Scenario 2: The name of a bad user X is revealed, and the tracing request is to trace all of his transactions. If we simply open all the signatures, it is no good, because we would expose everybody's signatures just because of one bad guy. We need a mechanism that reveals only the signatures of the bad user X, while all other signatures remain private.
In Scenario 3, there is an anonymous signature, and we want the user to be able to come forward and own up to it: the user can open the signature and identify it as his or her own, and the claim can be verified as correct.
This led us to design traceable signatures: an anonymous signature scheme in which, unlike in the usual public-key infrastructure, users can remain anonymous. If something bad happens, the authority can open a signature to reveal the signer, or trace all signatures of a named user without touching the other users; and a user can claim a signature and have the claim verified. PKI is considered the enemy of privacy, because users identify themselves; how can they sign without identifying themselves? Traceable signatures give a solution. The design is based on anonymous signing, with signature-opening, user-tracing, and signature-claiming capabilities. Currently we are working on integrating it with traditional PKI at the system level. It is not true that if you want users to sign in a way that is recognized by a court, that is, non-repudiable, you must immediately let them be identified; you can design traceable signatures, which are much friendlier to privacy. With the new constraints and new privacy problems, we can actually update our basic designs and get better privacy.
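The three capabilities can be illustrated with a toy model of the interface. To be clear, this is not the actual construction: the real scheme is publicly verifiable and uses zero-knowledge techniques, whereas here simple HMAC tags stand in, the group manager just stores every member's tracing key, and the publicly verifiable signature component is omitted. All names are illustrative.

```python
import hashlib
import hmac
import secrets

def tag(key: bytes, data: bytes) -> bytes:
    return hmac.new(key, data, hashlib.sha256).digest()

class GroupManager:
    def __init__(self):
        self._members = {}                    # name -> per-user tracing key

    def join(self, name: str) -> bytes:
        k = secrets.token_bytes(32)
        self._members[name] = k
        return k

    def open(self, sig):                      # Scenario 1: reveal one signer
        msg, r, t = sig
        for name, k in self._members.items():
            if hmac.compare_digest(tag(k, r + msg), t):
                return name
        return None

    def trapdoor(self, name: str) -> bytes:   # Scenario 2: trace a named user
        return self._members[name]

class Member:
    def __init__(self, name: str, gm: GroupManager):
        self._k = gm.join(name)

    def sign(self, msg: bytes):
        r = secrets.token_bytes(16)           # fresh randomness: tags are unlinkable
        return (msg, r, tag(self._k, r + msg))

    def claim(self, sig) -> bool:             # Scenario 3: own up to a signature
        msg, r, t = sig
        return hmac.compare_digest(tag(self._k, r + msg), t)

def trace(trapdoor: bytes, sig) -> bool:      # test one signature against one user
    msg, r, t = sig
    return hmac.compare_digest(tag(trapdoor, r + msg), t)

gm = GroupManager()
alice, bob = Member("alice", gm), Member("bob", gm)
sig = alice.sign(b"anonymous transaction")
assert gm.open(sig) == "alice"                # open: the authority names the signer
assert trace(gm.trapdoor("alice"), sig)       # trace: only alice's signatures match
assert not trace(gm.trapdoor("bob"), sig)     # other users' signatures stay private
assert alice.claim(sig) and not bob.claim(sig)
```

The key property mirrored here is selectivity: the trapdoor for user X lets verifiers test any signature against X alone, so tracing one bad user never opens anyone else's signatures.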
As an application, we can have a privacy-preserving public-key infrastructure in which users usually remain anonymous. Or we can arrange that users are always anonymous to their service providers, signing anonymously, while the tracing is done by the billing authority or by the banks that do the billing: the service providers do not know who the users are, whereas the billing authorities do. Each authority in the infrastructure gets a different level of privacy exposure, and I think this is very important for future applications. So I have given you two examples of current research: one about better protecting keys in this new world, and one about better privacy protection.
To conclude: ubiquity is coming. Security in this setting requires some care, because there are more constraints. Some basic primitives are under development, at least as abstract ideas, but as I said in the first part of the talk, abstract ideas find their way into real systems.
Cryptography and security solutions should evolve as the setting and the technology evolve; that is one thing. Second, the earlier security and cryptography design considerations are included in the overall system design, the better. And better security solutions may lead to new functions and new possibilities, because security is both about protecting applications and about enabling new mechanisms.
The above is an extract from a lecture by Dr. Moti Yung given at the KCG hall on October 16, 2006.