Andrew Nash is the chief technology officer at Reactivity Inc. He is the co-author of numerous Web Services specifications including WS-Security, WS-Trust, WS-Federation, WS-SecureConversation and WS-SecurityPolicy. Previously he was a director of technologies at RSA Security. He has also authored a book on public key infrastructure.
What happened in 2005 that took you by surprise?
Andrew Nash: The thing that was pleasantly surprising for us about 2005 was that the general understanding of Web services switched very strongly to appliances. For the first time, people had really bought into the fact that you could do a bunch of this application-level processing in the network.
One of the things we'd been postulating for a while was that you could potentially start to do some functions that look like business logic in the network, and we started to find that you could perform functions like routing messages to specific Web services based on their content. You could actually look at a message and say, "This is a transaction that exceeds my threshold; I'd like to send it to the manual verification Web service," for example. As you begin to put logic that is coded, handled or defined by an enterprise onto these appliances, that becomes a very interesting way to deal with Web services.
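The content-based routing Nash describes can be sketched in a few lines. This is an illustrative example, not any vendor's API: the endpoint URLs, the `amount` element and the threshold are all hypothetical.

```python
import xml.etree.ElementTree as ET

# Hypothetical endpoints and threshold -- illustrative only.
MANUAL_REVIEW_URL = "http://internal/services/manual-verification"
AUTO_PROCESS_URL = "http://internal/services/auto-processing"
THRESHOLD = 10_000

def route(message_xml: str) -> str:
    """Pick a backend Web service based on the content of the message."""
    root = ET.fromstring(message_xml)
    amount = float(root.findtext(".//amount", default="0"))
    # Transactions over the threshold go to manual verification.
    return MANUAL_REVIEW_URL if amount > THRESHOLD else AUTO_PROCESS_URL

print(route("<tx><amount>25000</amount></tx>"))
```

The point is that the routing decision depends only on the message itself, so it can live on an appliance at the network edge rather than inside the application.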
Another pleasant surprise was we finally got the Microsoft/IBM guys to put the rest of the security specs, other than WS-Federation, into OASIS.
Who is it that you find is warming up to the notion that you can use gear to address application-level issues? Is it developers and architects or is it IT managers?
Nash: The majority of the people who are looking at this are IT and application-level architects. I think the realization of how this stuff is going to work at the IT operations level is still to come. If you talk to your average IT operations folks about Web services or XML, I don't think the first thing they would say is "Oh yes, next week we plan to buy one of these boxes."
At this moment in time, how plugged into the mechanics of SOA are the folks at the executive level?
Nash: The executives are much more focused at the level of: Do you want to build a service-oriented architecture? Are you going to use Web services to connect your partners? In terms of whether an appliance gets used here, the CTOs, and maybe some CIOs, are starting to get into that picture, but at the moment they're thinking about these questions at a much higher level.
I think the interesting question for 2006 is where operational control of SOA is going to reside. I went to two customer sites over the last two days and was surprised that the network operations people were not brought into what was a very large review of some products, because almost every other place I've been there were always one or two network ops people in the room with the management folks as part of the process of evaluating these products. What I see happening at the moment is this: Enterprises are still trying to work out whether this is a network ops kind of play, whether it's a security-defined kind of role, or whether applications architects, maybe applications operations, are going to own it. It's fairly distributed at the moment.
What I think will happen in 2006 is there will be a realization that an operations group that crosses those boundaries needs to be created to deal with the various aspects of Web services. In some places they'll add application expertise to the network ops folks. In some places they'll go to the security folks and give them some operations people. Or it may be some amalgam where they create cross-functional groups.
XML networking vendors can be hard to pin down. Some, like Reactivity, sell gear while others sell software. How does it all fit together?
Nash: We're finding there are two ways you need to think about this problem. At the first level, there's the general-purpose software or network agents running on a set of unique, special-purpose [Application-Specific Integrated Circuits]. The second level that floats over that is a network appliance, or perhaps a better phrase would be a network intermediary, where we're talking about components that reside, architecturally, at the edge of the platform.
What I often see happen is that those two areas, for understandable reasons, tend to get mixed together. Let me deal with network agents first. In that space, what had been believed for a while, particularly by folks like DataPower, was that you needed to do something that looked a lot like the early days of routing hardware: in order to address the problem, you needed to build special-purpose hardware. Yet any startup is going to be in a tough position continuing to innovate at the hardware level fast enough to stay ahead of the kind of performance you can get out of an Intel or an AMD.
So rather than focus on hardware, let's focus on really smart algorithms and really effective ways to deal with caching and reuse of information, and ride the Intel performance curve, which is invariably going to come. In terms of smart algorithms, we do a lot that's associated with caching. We actually cache knowledge about the fingerprints of the Web services messages we're seeing, and within three or four similar messages we're taking only 10% of the time to process a message that it took us to process the first one. We also do things like caching SAML assertions and LDAP query results, which reduces the amount of network traffic that's going on.
So where does the network gear come into play?
Nash: For the network intermediaries, we found that for customers trying to run XML-based processing, up to 90% of the overhead was being consumed by things like schema validation, detection of the tags, identity-based integration, signing or encryption, whatever dealt with the message-level aspects of XML. Only 10% of their processing went to the application logic that actually runs the stuff behind the Web services. By moving all that generic, message-focused processing away from the platform and into the network intermediary, you see a dramatic improvement in the effectiveness of your platforms.
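The division of labor described above can be sketched as a simple two-stage pipeline. Everything here is illustrative, not a real product's API: the intermediary stage handles generic per-message work (well-formedness, a crude nesting-depth guard against abusive payloads), and the backend stage runs only business logic.

```python
import xml.etree.ElementTree as ET

MAX_DEPTH = 32  # crude guard against deeply nested, abusive XML (hypothetical limit)

def intermediary_filter(raw: bytes) -> ET.Element:
    """Generic message-level processing, done at the network edge."""
    root = ET.fromstring(raw)  # rejects malformed XML outright
    def depth(elem: ET.Element, d: int = 1) -> int:
        return max([d] + [depth(child, d + 1) for child in elem])
    if depth(root) > MAX_DEPTH:
        raise ValueError("possible nesting attack: message rejected at the edge")
    return root

def backend_service(order: ET.Element) -> str:
    """Business logic only -- the 10% that belongs on the platform."""
    return f"processed order {order.findtext('id')}"

msg = b"<order><id>42</id></order>"
print(backend_service(intermediary_filter(msg)))
```

Because the filter never touches business state, it can run on an appliance in front of many platforms, which is what makes the "protective shield" argument in the next answer work.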
If you look at BEA or IBM or Oracle, pretty much anybody that's doing platform stuff, if you poke at it hard you find they get really squishy about what their real transaction rates are. In fact, one of the major reasons IBM acquired DataPower was that IBM's WebSphere backend for dealing with XML was just slow. You really want to move that processing, particularly if it's policy-driven and you can do it across multiple platforms, out of the platforms and into the network so you create an effective boundary. Also, for things like threat-based security, what you find when you're trying to deal with a denial-of-service attack on a platform is that by the time you realize you have one, your platform's already in a heap. You really want to move threat-based security away from the platform so you've created a protective shield around the servers that are actually delivering these services. For a whole host of reasons, dealing with these things in network intermediaries is a much smarter thing to do than with anything that runs on the platform itself, bearing in mind that there is always a class of problems you'll want to solve on the platform. Those tend to be more related to business logic, workflow or business process execution.
This story also appears at SearchWebServices.com, part of the TechTarget network.