eWeek – How CIOs Can Make Their Businesses More Competitive

October 10, 2006

URL: http://blog.eweek.com/blogs/bizbytes/archive/2006/10/09/13767.aspx 

When Mark McDonald, head of Gartner’s Executive Programs, took the stage at the Gartner Symposium today in Orlando, the packed house knew immediately what they were in for: a pep rally.

Maybe it’s the fact that McDonald looks like a linebacker. Maybe it’s the fact that he works up a sweat bellowing out his CIO directives. Whatever it is, the guy is downright inspirational. And he had one single message for the IT leaders in the room: Make your company more competitive.

McDonald’s impassioned speech was long on motivation, short on details. But here’s a little factual information that backs up his belief that in 2007 CIOs must advance their companies’ competitive stance or look for other work. The early returns from Gartner’s 2007 CIO survey are starting to come in, and the order of business priorities is as follows:

1.)    Improve business processes

2.)    Reduce operating costs

3.)    Attract and grow customer base

4.)    Support competitive advantage

5.)    Improve enterprise competitiveness

6.)    Grow revenue

7.)    Improve information intelligence

8.)    Deploy business capabilities

9.)    Improve bottom line profitability

10.)   Security and data protection

Just look at those results for a minute. This is what the business is expecting of IT. Grow customer base? Grow revenue? Since when is this stuff IT’s job? Just look at where traditional IT responsibilities fall on this list. Security is dead last.

To this, McDonald had this to say: “You’ve won. Oliver Stone could not have come up with a better conspiracy theory. First you automate their transactions, then you start automating their processes, then you push technology out to the edge of the network. You’ve achieved Borgdom. You’ve won. And that means that competitive advantage is now an IT issue.”

I could have done without yet another Star Trek reference at a technology conference (haven’t we moved beyond that yet?), but the point is well made. Now that IT has become so integral to all aspects of business operations, it’s time for CIOs to make like business people.

McDonald says the key to this is to stop thinking about IT as a bunch of layers in an enterprise. He uses a cake as an analogy: see the whole cake, not the individual layers. And ask yourself, before you begin any project, what a customer wants. “Who’s hungry, how do we find them, how do we get the cake to them, how do we charge them and, when they are done, how will they get another piece?” he said.

Sounds like a piece of cake, right? Sorry. Bad joke.

Anyway, McDonald reminded CIOs that they must be the idea generators. CEOs and line-of-business managers don’t understand what technology can do. CIOs must bring ideas to the table. And constantly ask themselves: What will be tangibly different about the business when I am finished [with this project]?

Well, if you believe McDonald, IT has bulled its way to a seat at the corporate table. Now it’s time to prove it really belongs.

InfoWorld: Graham Lovell Talking About Sun & Virtualization

October 5, 2006

According to a recent press release from Sun Microsystems, one company in particular is combining the well-planned architecture of Sun’s Sun Fire X4200 server with the power of virtualization to perform a 22-to-2 server consolidation, reducing power consumption and heat output by up to 84 percent. NewEnergy is replacing its entire Houston data center, made up of 22 Intel processor-based servers, with two Sun Fire X4200 servers powered by the Dual-Core AMD Opteron processor and running the Solaris 10 OS. NewEnergy’s Houston data center performs CPU-intensive Grid computing simulations for its customers nationwide, which mirror real-world electric grids in order to plan for potential disasters. Trial results showed the Sun Fire X4200 servers to be much faster than the servers they replaced, a result partially credited to the Solaris 10 OS’s efficiency with memory-intensive applications compared with the Windows OS.

Sun and VMware have combined efforts to provide innovation and deliver proven virtual infrastructure solutions for enterprise computing. Leveraging the power of VMware Infrastructure 3, Solaris 10 and the Sun Fire series of servers, customers can maximize performance and reduce overall cost of ownership via server consolidation, business continuity, and test or development solutions. By combining these products, IT managers get a complete solution to help increase server utilization, improve performance and reduce costs while making better use of data center resources such as space, cooling and power.

I recently had the pleasure of speaking with Graham Lovell, Senior Director of the Systems Group for Sun Microsystems. I wanted to find out more about Sun and to get his take on the whole virtualization scene, specifically software licensing, emerging trends, and customer needs in the virtualization space.
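
The 22-to-2 consolidation and the quoted 84 percent reduction are easy to reason about with a back-of-the-envelope calculation. The sketch below is a minimal Python illustration, not anything from Sun or NewEnergy; the per-server wattage figures are assumptions chosen only to show the arithmetic.

    # Back-of-the-envelope consolidation estimate (illustrative only).
    # Wattage figures below are assumptions, not measured NewEnergy numbers.

    def consolidation_savings(old_count, old_watts_each, new_count, new_watts_each):
        """Return (old_total_watts, new_total_watts, percent_reduction)."""
        old_total = old_count * old_watts_each
        new_total = new_count * new_watts_each
        reduction = 100.0 * (old_total - new_total) / old_total
        return old_total, new_total, reduction

    # 22 older single-application Intel boxes vs. 2 dual-core Opteron Sun Fire X4200s.
    old_w, new_w, pct = consolidation_savings(22, 450, 2, 550)
    print(f"before: {old_w} W  after: {new_w} W  reduction: {pct:.0f}%")
    # With these assumed figures the reduction lands in the high-80s percent range,
    # the same ballpark as the 84 percent the press release claims.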


David Marshall: In your own words, what is Sun’s strategy towards virtualization?

Graham Lovell: The first thing we need to establish is what we mean by virtualization and how we communicate it. We need to define it with customers in different circumstances.

Customers generally look to improve the utilization of their servers. They want to run multiple applications and different operating systems. The idea is to snapshot what they have on a piece of hardware and then run it on another system in a virtualized way. They can see the benefits of running virtualized environments, but they have to support it. They need management tools to run it well.

It is important that suppliers such as Sun can provide a range of options across multiple operating systems. We have SPARC and x86 product lines. With Solaris 10, we have containers. Containers let you run isolated environments where each one thinks it’s running on a dedicated system, but unlike with Xen or VMware, you aren’t running multiple copies of the operating system.

This has been popular with customers running Solaris on SPARC and Solaris on x86 platforms.

The next choice is that customers can select VMware. VMware has a number of new products, but people think of it as a single solution. When we talk to customers about their experience with VMware, some of them may have only just heard of it. That is when we can talk about different styles of implementation.

Customers are also seeing the benefit of how they can mix in VMware. They talk about pooling resources in the data center so they can spread work across several servers. This makes it easier to move applications around and helps with capacity planning. Virtualization can help you establish that pooled behavior.


David: Are you finding that people are using Solaris containers to do the same thing as VMware, such as for development and test or support? Or are they strictly using them for server consolidation?

Graham: Customers look at virtualization to test and debug applications across a range of application systems. That is where the customer can be more sophisticated in their choice between VMware and Xen. With VMware, you have more choice today. Xen is up and coming. It is embedded in a number of operating systems, and it has interesting new budget tools. I think Xen will have an interesting future in the virtualization stack as well. Containers are typically rolled out in an application environment.
David: How does virtualization impact software licensing?

Graham: The software industry is reeling from pricing multiple cores per processor. Microsoft has strong policies around pricing cores. Virtualization software subdivides a processor into pieces of CPU, so the argument becomes: why pay for the whole software license when you only use a fraction of the processor? Value-based pricing is a more reasonable way to charge for software. I think Microsoft is one of the first to come out with policies around virtualized environments.

David: I agree with you. Software licensing will have to change. People are using virtual machines for things such as disaster recovery, and software companies will have to adapt.

Graham: Without flexibility in licensing, customers may find themselves paying more for the software if they move it from a 2-core system to an 8-core system. Virtualized environments have bigger engines, and customers need to make sure they don’t fall afoul of software restrictions. They need to go back to their ISVs and ask: is it OK if I move from 2 cores to 4? Then you have a start for negotiation.

Sun has an enterprise licensing system where you are charged by the number of employees in the company. It doesn’t matter how much hardware you run; it’s a site-based license with lots of flexibility.
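
To make the 2-core-to-8-core point concrete, here is a small hypothetical comparison in Python. The license prices are invented for illustration; the point is only that strict per-core pricing multiplies cost when a workload lands on a bigger virtualized host, while a site- or value-based model does not.

    # Hypothetical license-cost comparison; all prices are made up for illustration.

    def per_core_cost(price_per_core, cores_on_host):
        """Strict per-core licensing: pay for every core in the box."""
        return price_per_core * cores_on_host

    def site_cost(price_per_employee, employees):
        """Site/value-based licensing: cost tracks the business, not the hardware."""
        return price_per_employee * employees

    price_per_core = 2_500                                     # assumed list price per core
    print("2-core host:", per_core_cost(price_per_core, 2))    # 5,000
    print("8-core host:", per_core_cost(price_per_core, 8))    # 20,000, a 4x jump
    # Same workload, same fraction of CPU actually used, four times the bill.
    print("site-based :", site_cost(20, 1_000))                # 20,000 regardless of core count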


David: What do you think is driving the demand for virtualization today?

Graham: A customer says: I’ve got this Windows NT application, and the problem is I can no longer get hardware that will run that old operating system natively. Legacy support is one of the key drivers for virtualization.

Server sprawl also generates too much heat and uses too much power. If I consolidate those servers, I can improve the use of space, heat and power in the data center.

When customers think about disaster planning, they need to easily migrate applications across platforms. If one data center has a problem, it’s easier to migrate in a virtual environment than a non-virtual one.

Virtualization also offers more flexibility. When a new business need comes along and the IT department has to respond quickly, virtualization can ramp things up.
David: I’ve seen problems using VMware and Xen with patch management. Since the containers approach is based on one operating system, would that solve part of the patch management problem? It seems like instead of having to patch multiple areas, you just have to patch one.

Graham: The flip side is that everything runs the same kernel code, so it is all consistent. You can apply different patches in user space, but you can’t have multiple kernels; if you make a kernel change, it is reflected across all the containers. If you need different kernel patch levels, you may want to run VMware with several instances of Solaris, so that each instance of Solaris has its own patch level.

David: Can you leave us with a good customer example?

Graham: The one that gets my juices going is NewEnergy Associates. Neal Tisdale, Vice President of Software Development at NewEnergy Associates, consolidated 22 Dell servers down to 2 Sun servers. He cut down not just the number of systems, but also the heat, power and physical space, and he now manages a consolidated server environment. That is the low-hanging fruit for customers: they can do better with modern technology and realize huge energy cost savings. Computing is underutilized by customers. There are significant benefits to making that change and to pushing people to experiment.

Network World – The Server Strategy – Virtualization

October 5, 2006

IT execs who have delayed virtualizing their x86-based servers for fear the technology is still unproven should put that project at the top of their to-do lists for 2006, as the market for virtualizing these low-end systems heats up.

It’s a combination of factors – the increasing power and stability of the x86 platform, the maturing of virtualization software and a growing choice of software vendors – that is driving adoption at a surprisingly fast clip, analysts say.

“In 2005 I saw a lot of enterprises dabbling with virtualization in test and development environments, particularly for server consolidation and cost savings,” says Scott Donahue, an analyst at Tier 1 Research. “What has surprised me more recently when I’ve talked to enterprise clients is the speed at which virtualization has actually moved into production environments.”

IDC describes the shift to x86-based server virtualization as well underway and expects widespread adoption to take place during the next couple of years, without “a five- to 10-year gradual market shift as in other technology areas.” Companies lacking a virtualization strategy for low-end systems will pay more in the long run, in hardware costs and management headaches, analysts say.

Gartner, for example, estimates that most x86-based servers running a single application – the traditional deployment for these low-end boxes – operate at about a 10 percent average utilization rate. Using virtualization to consolidate workloads into a single box should increase utilization significantly.
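
A rough way to see why the 10 percent figure matters: if several independent workloads each average about 10 percent utilization, stacking them as virtual machines on one host raises that host’s average utilization roughly additively, before hypervisor overhead and peak headroom are accounted for. The sketch below is a simplified illustrative model, not a Gartner formula.

    # Simplified utilization model: averages add; real sizing must also consider
    # peak overlap and hypervisor overhead, which this sketch only hints at.

    def consolidated_average(workload_utilizations, overhead=0.05):
        """Approximate average utilization of one host running all the workloads."""
        return sum(workload_utilizations) + overhead

    workloads = [0.10] * 6                           # six servers averaging ~10% each
    print(f"{consolidated_average(workloads):.0%}")  # roughly 65% on the single host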

In addition, as the x86 platform itself becomes more powerful, customers should find a growing list of applications appropriate for a virtualized environment. In the last couple of years, systems vendors stepped up the performance of their low-end systems with dual-core processors and 64-bit support. This year will bring servers with virtualization technology built into the silicon, a huge step for the x86 platform, which today can only be virtualized with some fancy – and performance-draining – footwork from software vendors such as VMware and Microsoft.

Having virtualization capabilities hard-wired into the chip means end users will get better performance out of virtual servers, software files that contain an operating system and applications. It also means that VMware and its competitors likely will shift their focus to management tools, resulting in more advanced management capabilities down the road.

Today’s management tools enable end users to easily move and copy virtual servers, providing a simple approach to disaster recovery and high availability. But advanced capabilities – such as a faster and more seamless migration of virtual servers among physical systems – are likely to come in the months ahead. Analysts recommend that customers take a close look at management strategies when they choose a virtualization partner.

“In the next year and a half to two years, the market will be flipping on its head completely. . . . It will shift from the hypervisor [low-level virtualization technology] to management,” says Tom Bittman, a Gartner vice president and Fellow. “So the focus should be on choosing management tools and automation, not on choosing a hypervisor. That will be a commodity.”

Another development that makes 2006 a key year for deploying x86 server virtualization is movement among the independent software vendors to make licensing in a virtual environment more user-friendly. Microsoft, for example, late last year announced a new virtualization-licensing model that stands to slash costs for end users. Though analysts note that this is a small first step in an evolving discussion, it’s encouraging to see Microsoft make an early move, industry experts agree.

Those still unsure if server virtualization on x86 systems has moved beyond hype should consider that open source is getting in on the game, with XenSource announcing its first commercial product designed to make it easier for customers to deploy and manage the open source Xen VM technology in corporate networks.

Although VMware has held a nearly uncontested leadership position since 2001, when it introduced the industry’s first virtualization software for x86-based servers, 2006 will bring end users more options in virtualizing low-end systems. That’s good news from both a price and a performance standpoint.

Software from Microsoft, SWsoft and start-ups such as Virtual Iron and XenSource offer interesting alternatives. With the underlying virtualization technology becoming available in hardware, management tools from companies such as PlateSpin, Leostream and Platform Computing deserve a closer look. Analysts also expect systems vendors such as Dell and HP to intensify their focus on this area.

Ulrich Seif, CIO at National Semiconductor in Santa Clara, Calif., says Intel’s and AMD’s plans to incorporate virtualization into their processors, and the maturing of virtualization software’s features, make slicing and dicing x86 servers a smart move, regardless of the vendor.

Seif brought in VMware last year to consolidate an increasing number of Windows servers and says he already has seen a 33 percent savings and now has an architecture that is flexible and easier to manage. “Almost more importantly, [with server virtualization] you are positioning yourself for future [architectures] that will come natural[ly] with virtualization: true grid computing (with solid management tools); ultimate virus and intrusion detection (the host scanning guest memory for patterns); and software and configuration management,” he says.

Gartner – 2006 Emerging Technologies Hype Cycle

September 27, 2006

Gartner, Inc., today announced its 2006 Emerging Technologies Hype Cycle, which assesses the maturity, impact and adoption speed of 36 key technologies and trends over the next ten years. This year’s hype cycle highlights three major themes that are experiencing significant activity and that include new or heavily hyped technologies, where organisations may be uncertain about which will have the most impact on their business.

The three key technology themes identified by Gartner, and the corresponding technologies for enterprises to examine closely within them, are:

1. Web 2.0

Web 2.0 represents a broad collection of recent trends in Internet technologies and business models.  Particular focus has been given to user-created content, lightweight technology, service-based access and shared revenue models.  Technologies rated by Gartner as having transformational, high or moderate impact include:

Social Network Analysis (SNA) is rated as high impact (definition: enables new ways of performing vertical applications that will result in significantly increased revenue or cost savings for an enterprise) and capable of reaching maturity in less than two years. SNA is the use of information and knowledge from many people and their personal networks. It involves collecting massive amounts of data from multiple sources, analyzing the data to identify relationships and mining it for new information. Gartner said that SNA can successfully impact a business by being used to identify target markets, create successful project teams and serendipitously identify unvoiced conclusions.
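
As a concrete, if toy, illustration of what SNA tooling does under the hood, the sketch below builds a small interaction graph and ranks people by degree centrality, i.e. how many distinct contacts each person has. The names and relationships are invented; real SNA products mine e-mail, CRM and directory data at far larger scale.

    # Toy social-network analysis: rank people by degree centrality.
    # Names and relationships are invented for illustration.
    from collections import defaultdict

    edges = [("alice", "bob"), ("alice", "carol"), ("bob", "carol"),
             ("carol", "dave"), ("dave", "erin"), ("carol", "erin")]

    graph = defaultdict(set)
    for a, b in edges:
        graph[a].add(b)
        graph[b].add(a)

    # Degree centrality: number of distinct contacts per person.
    ranking = sorted(graph.items(), key=lambda kv: len(kv[1]), reverse=True)
    for person, contacts in ranking:
        print(person, len(contacts))   # carol comes out as the best-connected node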

Ajax is also rated as high impact and capable of reaching maturity in less than two years. Ajax is a collection of techniques that Web developers use to deliver an enhanced, more-responsive user experience within the confines of a modern browser (for example, a recent version of Internet Explorer, Firefox, Mozilla, Safari or Opera). A narrow-scope use of Ajax can have a limited impact in terms of making a difficult-to-use Web application somewhat less difficult. However, Gartner said, even this limited impact is worth it, and users will appreciate incremental improvements in the usability of applications. High levels of impact and business value can only be achieved when the development process encompasses innovations in usability and reliance on complementary server-side processing (as is done in Google Maps).

Collective intelligence, rated as transformational (definition: enables new ways of doing business across industries that will result in major shifts in industry dynamics) is expected to reach mainstream adoption in five to ten years. Collective intelligence is an approach to producing intellectual content (such as code, documents, indexing and decisions) that results from individuals working together with no centralized authority. This is seen as a more cost-efficient way of producing content, metadata, software and certain services.

Mashup is rated as moderate on the Hype Cycle (definition: provides incremental improvements to established processes that will result in increased revenue or cost savings for an enterprise), but is expected to hit mainstream adoption in less than two years. A “mashup” is a lightweight tactical integration of multi-sourced applications or content into a single offering. Because mashups leverage data and services from public Web sites and Web applications, they’re lightweight in implementation and built with a minimal amount of code. Their primary business benefit is that they can quickly meet tactical needs with reduced development costs and improved user satisfaction. Gartner warns that because they combine data and logic from multiple sources, they’re vulnerable to failures in any one of those sources.
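
The sketch below shows the mashup idea in miniature: two independently sourced data sets are joined on a shared key with very little code. Both feeds are hard-coded stand-ins for what would normally be calls to public web APIs, precisely so the example does not depend on any particular site.

    # Minimal "mashup": join two independently sourced feeds on a shared key.
    # Both feeds are hard-coded stand-ins for real public web APIs.

    listings = [  # e.g. from a classifieds feed
        {"city": "Austin", "rent": 1200},
        {"city": "Denver", "rent": 1100},
    ]
    weather = [   # e.g. from a weather feed
        {"city": "Austin", "temp_f": 93},
        {"city": "Denver", "temp_f": 71},
    ]

    weather_by_city = {w["city"]: w["temp_f"] for w in weather}
    mashup = [dict(listing, temp_f=weather_by_city.get(listing["city"]))
              for listing in listings]

    for row in mashup:
        print(row)   # each listing now carries data from the second source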

2. Real World Web 

Increasingly, real-world objects will not only contain local processing capabilities—due to the falling size and cost of microprocessors—but they will also be able to interact with their surroundings through sensing and networking capabilities. The emergence of this Real World Web will bring the power of the Web, which today is perceived as a “separate” virtual place, to the user’s point of need of information or transaction. Technologies rated as having particularly high impact include:

Location-aware technologies should hit maturity in less than two years. Location-aware technology is the use of GPS (global positioning system), assisted GPS (A-GPS), Enhanced Observed Time Difference (EOTD), enhanced GPS (E-GPS) and other technologies in the cellular network and handset to locate a mobile user. Users should evaluate the potential benefits to their business processes of location-enabled products such as personal navigation devices (for example, TomTom or Garmin) or Bluetooth-enabled GPS receivers, as well as WLAN location equipment that may help automate complex processes, such as logistics and maintenance. As the market consolidates around a reduced number of high-accuracy technologies, the location-services ecosystem will benefit from a number of standardized application interfaces for deploying location services and applications across a wide range of wireless devices.

Location-aware applications will hit mainstream adoption in the next two to five years. An increasing number of organizations have deployed location-aware mobile business applications, mostly based on GPS-enabled devices, to support key business processes and activities, such as field force management, fleet management, logistics and goods transportation. The market is in an early adoption phase, and Europe is slightly ahead of the United States, due to the higher maturity, availability and standardization of mobile networks.
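
Most location-aware applications ultimately reduce to computing distances between a device fix and known points of interest. The sketch below uses the standard haversine formula to find the nearest field asset to a GPS position; the asset names and coordinates are arbitrary examples.

    # Find the nearest asset to a GPS fix using the haversine great-circle distance.
    # Asset names and coordinates below are arbitrary example values.
    from math import radians, sin, cos, asin, sqrt

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance between two lat/lon points, in kilometres."""
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 2 * 6371.0 * asin(sqrt(a))

    assets = {"truck-7": (48.87, 2.33), "truck-9": (48.51, 2.21), "depot": (48.70, 2.45)}
    fix = (48.86, 2.35)   # current GPS position to dispatch against

    nearest = min(assets, key=lambda name: haversine_km(*fix, *assets[name]))
    print(nearest)        # truck-7 is closest to this fix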

Sensor Mesh Networks are ad hoc networks formed by dynamic meshes of peer nodes, each of which includes simple networking, computing and sensing capabilities. Some implementations offer low-power operation and multi-year battery life. Technologically aggressive organizations looking for low-cost sensing and robust self-organizing networks with small data transmission volumes should explore sensor networking. The market is still immature and fragmented, and there are few standards, so suppliers will evolve and equipment could become obsolete relatively rapidly. Therefore, this area should be seen as a tactical investment, as mainstream adoption is not expected for more than ten years.

3. Applications Architecture 

The software infrastructure that provides the foundation for modern business applications continues to mirror business requirements more directly. The modularity and agility offered by service oriented architecture at the technology level and business process management at the business level will continue to evolve through high impact shifts such as model-driven and event-driven architectures, and corporate semantic Web. Technologies rated as having particularly high impact include:

Event-driven Architecture (EDA) is an architectural style for distributed applications, in which certain discrete functions are packaged into modular, encapsulated, shareable components, some of which are triggered by the arrival of one or more event objects. Event objects may be generated directly by an application, or they may be generated by an adapter or agent that operates non-invasively (for example, by examining message headers and message contents). EDA has an impact on every industry. Although mainstream adoption of all forms of EDA is still five to ten years away, complex-event processing EDA is now being used in financial trading, energy trading, supply chain, fraud detection, homeland security, telecommunications, customer contact center management, logistics and sensor networks, such as those based on RFID.
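
A minimal sketch of the event-driven style described above: producers publish event objects to a broker, and any subscribed component reacts when a matching event arrives. This is an illustrative toy, not a description of any particular EDA product.

    # Toy event-driven architecture: a broker dispatches event objects to subscribers.
    from collections import defaultdict
    from dataclasses import dataclass, field
    from typing import Callable

    @dataclass
    class Event:
        topic: str
        payload: dict = field(default_factory=dict)

    class EventBroker:
        def __init__(self):
            self._subscribers = defaultdict(list)

        def subscribe(self, topic: str, handler: Callable[[Event], None]) -> None:
            self._subscribers[topic].append(handler)

        def publish(self, event: Event) -> None:
            # Components stay decoupled: the publisher never calls handlers directly.
            for handler in self._subscribers[event.topic]:
                handler(event)

    broker = EventBroker()
    broker.subscribe("trade.executed", lambda e: print("risk check:", e.payload))
    broker.subscribe("trade.executed", lambda e: print("audit log :", e.payload))
    broker.publish(Event("trade.executed", {"symbol": "SUNW", "qty": 500}))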

Model-driven Architecture is a registered trademark of the Object Management Group (OMG). It describes OMG’s proposed approach to separating business-level functionality from the technical nuances of its implementation. The premise behind OMG’s Model-Driven Architecture and the broader family of model-driven approaches (MDAs) is to enable business-level functionality to be modeled by standards, such as Unified Modeling Language (UML) in OMG’s case; allow the models to exist independently of platform-induced constraints and requirements; and then instantiate those models into specific runtime implementations, based on the target platform of choice. MDAs reinforce the focus on business first and technology second. The concepts focus attention on modeling the business: business rules, business roles, business interactions and so on. The instantiation of these business models in specific software applications or components flows from the business model. By reinforcing the business-level focus and coupling MDAs with SOA concepts, you end up with a system that is inherently more flexible and adaptable.
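
In the model-driven spirit, the sketch below takes a platform-independent model of a business entity (here just a plain dictionary) and instantiates it as a runtime class. Real MDA toolchains work from UML and generate far richer artifacts; this only illustrates the "model first, implementation derived" idea under simplified assumptions.

    # Illustrative model-driven sketch: a platform-independent model (a dict)
    # is instantiated into a concrete runtime class. Real MDA tools start from UML.
    from dataclasses import make_dataclass

    # "Business model": names and types only, no platform detail.
    claim_model = {"name": "Claim",
                   "fields": [("claim_id", str), ("amount", float), ("status", str)]}

    # One possible instantiation target: a Python dataclass.
    Claim = make_dataclass(claim_model["name"], claim_model["fields"])

    c = Claim(claim_id="C-1001", amount=250.0, status="open")
    print(c)   # Claim(claim_id='C-1001', amount=250.0, status='open')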

Corporate Semantic Web applies semantic Web technologies, aka semantic markup languages (for example, Resource Description Framework, Web Ontology Language and topic maps), to corporate Web content. Although mainstream adoption is still five to ten years away, many corporate IT areas are starting to engage in semantic Web technologies. Early adopters are in the areas of enterprise information integration, content management, life sciences and government. Corporate Semantic Web will reduce costs and improve the quality of content management, information access, system interoperability, database integration and data quality.
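
At its core, semantic markup boils down to subject-predicate-object triples that can be queried across sources. The tiny sketch below stores a few RDF-style triples as plain tuples and answers a simple pattern query; real deployments would use RDF/OWL toolkits and shared ontologies, and the identifiers here are invented.

    # RDF-style triples as plain tuples, plus a tiny pattern query.
    # Real semantic-web work would use RDF/OWL libraries and proper ontologies.

    triples = [
        ("doc:4711", "dc:subject", "claims-processing"),
        ("doc:4711", "dc:creator", "emp:jsmith"),
        ("doc:4712", "dc:subject", "claims-processing"),
        ("emp:jsmith", "org:memberOf", "dept:underwriting"),
    ]

    def query(s=None, p=None, o=None):
        """Return triples matching the given pattern; None acts as a wildcard."""
        return [t for t in triples
                if (s is None or t[0] == s)
                and (p is None or t[1] == p)
                and (o is None or t[2] == o)]

    # All documents about claims processing:
    print(query(p="dc:subject", o="claims-processing"))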

“The emerging technologies hype cycle covers the entire IT spectrum, but we aim to highlight technologies that are worth adopting early because of their potentially high business impact,” said Jackie Fenn, Gartner Fellow and inventor of the first hype cycle. One of the features highlighted in the 2006 Hype Cycle is the growing consumerisation of IT. “Many of the Web 2.0 phenomena have already reshaped the Web in the consumer world,” said Ms Fenn. “Companies need to establish how to incorporate consumer technologies in a secure and effective manner for employee productivity, and also how to transform them into business value for the enterprise.”

The benefit of a particular technology varies significantly across industries, so planners must determine which opportunities relate most closely to their organisational requirements. To make this easier, a new feature in Gartner’s 2006 hype cycle is a ‘priority matrix’ which clarifies a technology’s potential impact – from transformational to low – and the number of years it will take before it reaches mainstream adoption. “The pairing of each Hype Cycle with a Priority Matrix will help organisations to better determine the importance and timing of potential investments based on benefit rather than just hype,” said Ms Fenn.

Vertical SOA Solutions – I

September 27, 2006

Okay, at this point in time, every software vendor and consulting firm is embracing SOA in its product platform to be more politically correct in the process of enterprise strategy formulation – we have seen this happen from time to time, with all types of marketing pitches all over the web.

What is essential to getting SOA introduced to companies in vertical industries? Here come Vertical SOA Solutions – this is what we are currently building and implementing in the healthcare/insurance industry.

First off, let’s start with the essential backbone of the overall SOA enablement platform: the Enterprise Service Bus (ESB). There is quite a bit of hype around both SOA and ESB. Conceptually, the ESB is actually one of the major implementations of SOA. As the base platform for SOA, the ESB provides the fundamental enterprise messaging backbone, which consists of hub-and-spoke message brokers/integration servers or a distributed message bus – essentially product implementations of the JMS specification plus vendor-specific messaging features and enhancements. Features like guaranteed delivery and once-and-only-once delivery should sound familiar at this point.

Having said all of this, that is still a very high-level view. The overall capability of the ESB is to provide a common enterprise service fabric on which enterprise-level connectivity, message visibility and message delivery can be easily facilitated, and on which enterprise service endpoints can reside virtually anywhere.

One of many misconceptions is to consider a J2EE application server – including those from the market leaders – as the ESB platform. For those with such an understanding, thinking outside the box would be the first step. J2EE itself is a stack of technologies, and the application server has limited capabilities for providing enterprise-class messaging functionality. A lack of real-world experience and skills in traditional EAI/B2Bi could be one of the main contributors to this misconception. Without overall direction on SOA/ESB, enterprise-level SOA application development and implementation carries tremendous risk.
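
To make the messaging-backbone idea a bit more concrete, here is a deliberately tiny sketch of a hub-and-spoke broker with store-and-forward queues and consumer acknowledgements, which is roughly what “guaranteed delivery” and “once and only once” refer to. It is an illustration of the concepts only, not a model of any vendor’s ESB or of the JMS API itself.

    # Tiny hub-and-spoke broker sketch: store-and-forward queues with explicit acks.
    # Conceptual illustration only; not JMS and not any vendor's ESB.
    from collections import deque

    class Broker:
        def __init__(self):
            self._queues = {}      # queue name -> pending messages
            self._inflight = {}    # queue name -> message awaiting acknowledgement

        def send(self, queue: str, message: dict) -> None:
            # Store-and-forward: the message is held (here, in memory)
            # until a consumer acknowledges it.
            self._queues.setdefault(queue, deque()).append(message)

        def receive(self, queue: str):
            pending = self._queues.get(queue)
            if not pending:
                return None
            self._inflight[queue] = pending.popleft()
            return self._inflight[queue]

        def ack(self, queue: str) -> None:
            # Only an ack removes the message for good ("once and only once"
            # in spirit; real brokers also handle redelivery and duplicates).
            self._inflight.pop(queue, None)

    broker = Broker()
    broker.send("claims.intake", {"claim_id": "C-1001", "amount": 250.0})
    msg = broker.receive("claims.intake")
    print(msg)
    broker.ack("claims.intake")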

So far we have, I hope, achieved a solid understanding of SOA and ESB at the conceptual level. The SOA conceptual architecture is the foundation of the overall enterprise architecture, driving increased ROI through enterprise service reusability and value creation from existing IT assets.

Second, to take this conceptual understanding to the next level, you want a critical path or roadmap in front of you for the whole enterprise. Here come SOA lifecycle management and governance processes. Through an initial SOA assessment and gap analysis, the enterprise has the opportunity to gain a fresh understanding of where it stands. From there, a real-world SOA roadmap is on its way. We will talk about this in another post.

Third, I understand that it is more interesting to discuss mapping vendor products to the conceptual architecture in order to deliver logical and physical architecture blueprints. For experienced professionals (not merely trained ones) who possess significant experience implementing large, complex projects, it would be a no-brainer to narrow the choices down to the tier-1 players in this field:

  • webMethods: Fabric is a well-integrated ESB platform with both EAI and B2Bi well proven in the marketplace, along with the recent acquisition of Infravio.
  • Sun Microsystems/SeeBeyond: JCAPS, the latest release of SeeBeyond product suite after ICAN, delivers much improvement in both architecture and development.
  • TIBCO: BusinessWorks is still a young product compared to the rest, and performance and stability are the main areas for improvement.
  • IBM: WebSphere Business Integration (WBI) product family has a wide range of products through acquisition – the product integration itself remains to be proven.

These vendors are striving hard to maintain market leadership in existing and emerging markets, as well as investing heavily in their overall product suites through both in-house development and mergers and acquisitions. Each of them has distinctive product components; however, it is fairly straightforward to map the physical product components to the conceptual architecture.

Next, we will continue with some important architectural components such as BPM, BAM, EAI, B2Bi, the service registry and so on.