Four basic elements of autonomic computing

This paper provides an overview of autonomic computing, organized under the following topics.

 1.0 Introduction
 2.0 Why Is Autonomic Computing Needed?
 3.0 Major Research
           3.1 Research projects in autonomic computing
           3.2 University research projects in autonomic computing
 4.0 Four basic elements of autonomic computing
           4.1 Self-configuring
           4.2 Self-optimizing
           4.3 Self-healing
           4.4 Self-protecting
           Autonomic Computing vs. Current Computing
 5.0 Autonomic computing architecture
 6.0 Autonomic computing today
           6.1 Initiatives of autonomic computing
           6.2 Benefits
           6.3 Applications
 7.0 What Happens Without Autonomic Computing in the Future?
 8.0 Autonomic Computing Research Issues and Challenges
 9.0 Future Directions of Autonomic Computing
10.0 Conclusion

4.0 Four basic elements of autonomic computing

By examining the eight characteristics described above, researchers have identified four basic elements of autonomic computing: self-configuring, self-healing, self-optimizing, and self-protecting.

4.1 Self-Configuring

An autonomic computing system must be able to install and set up software automatically. To do so, it applies dynamic software configuration techniques: technical and administrative direction and surveillance are used to identify and document the functional and physical characteristics of a configurable item, to control changes to those characteristics, to record and report change processing and implementation status, and to verify compliance with specified service levels. The system must also download new versions of software and install regular service packs. Working with other autonomic components, it will keep virus-protection signatures and security levels up to date. Self-configuration will use adaptive algorithms to determine the optimum configurations. Typical self-configuring tasks include:
1. Updating Web pages dynamically with software changes, testing those changes, analyzing the results, releasing the system back into production, and reporting back to self-management whether the procedure was successful.
2. Installation, testing, and release of regular vendor service packs.
3. Installation of vendor patches, corrections, and modifications together with the necessary testing and release.
4. Installation of new software releases—automatically and seamlessly.
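The tasks above can be sketched as a small configuration loop. This is a minimal illustration, not from the source: the component names, the version catalog, and the `verify` callback are all hypothetical. The loop compares installed versions against available releases, applies missing updates, tests each one, and rolls back on a failed test, reporting the outcome back for self-management.

```python
# Minimal sketch of a self-configuring loop (hypothetical names).
# It compares installed software versions against a catalog of
# available releases, applies missing updates, verifies each one,
# and rolls back any update that fails its post-install test.

def self_configure(installed, available, verify):
    """Apply newer releases from `available`; roll back on failed tests."""
    report = []
    for name, new_version in available.items():
        previous = installed.get(name)
        if previous is not None and previous >= new_version:
            continue                       # already up to date
        installed[name] = new_version      # "install" the release
        if verify(name, new_version):      # post-install test
            report.append((name, new_version, "ok"))
        else:                              # roll back the failed install
            if previous is None:
                del installed[name]
            else:
                installed[name] = previous
            report.append((name, new_version, "rolled back"))
    return report

installed = {"db": 1, "web": 2}
available = {"db": 2, "web": 2, "antivirus": 1}
# Pretend the antivirus package fails its post-install test.
report = self_configure(installed, available,
                        verify=lambda name, ver: name != "antivirus")
```

A real system would of course drive package managers and change-management records rather than dictionaries, but the shape of the loop — detect, apply, test, roll back, report — is the same.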

4.2 Self-optimizing

An autonomic system will never settle for the status quo. It will constantly monitor predefined system goals and performance levels to ensure that all systems are running at optimum levels. With the business, its customers, and its suppliers all changing rapidly, the system must adapt its own requirements just as fast.

Self-optimization will be the key to allocating e-utility-type resources: determining when an increase in processing cycles is needed, how much is needed, where it is needed, and for how long. To be effective, autonomic self-optimization will need advanced metrics and feedback, in a form that supports rapid analysis. Many new and innovative techniques are needed for optimization to succeed; for example, control theory must be built into new autonomic infrastructures, and new algorithms will be needed to process control decisions. Typical self-optimizing tasks include:
1. Calling for additional processing power from the e-utility when needed. Releasing those additional cycles when peaks are over.
2. Working with outside vendor software.
3. Interfacing with other autonomic modules to exchange data and files.
4. Optimum sub-second response times for all types of access devices, such as personal computers, handheld devices, and media phones.
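The control-theory idea behind task 1 can be illustrated with a toy feedback controller. All numbers here (target utilization, gain, the load sequence) are assumptions for illustration only: capacity is nudged up when utilization runs above a target and released back when the peak is over.

```python
# Toy proportional controller for self-optimization (assumed numbers):
# request extra e-utility capacity when utilization exceeds a target,
# and release it again once demand drops.

def adjust_capacity(capacity, load, target=0.7, gain=0.5, minimum=1.0):
    """Return capacity nudged toward `target` utilization."""
    utilization = load / capacity
    error = utilization - target       # positive => under-provisioned
    new_capacity = capacity * (1 + gain * error)
    return max(minimum, new_capacity)  # never release below a floor

capacity = 10.0
for load in [9.0, 9.0, 3.0, 3.0]:      # a peak followed by a quiet period
    capacity = adjust_capacity(capacity, load)
```

During the peak the controller grows capacity above 10; once load falls, capacity is released again. Production self-optimizers would use richer models than proportional control, but this shows why the section calls for control theory and rapid metric feedback.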

4.3 Self-healing

Present computer systems are fragile: they fail at the smallest problem. If a period, a comma, or a bracket is out of place, the software fails. We still have much to do in designing fault-tolerant systems. Autonomic computing systems will have the ability to discover and repair potential problems to ensure that systems run smoothly.

With today's complex IT architectures, it can be hours before a problem is identified at the root-cause level. System staff members must pore over error logs and memory dumps, tracing step by step back to the point of failure. The cost of that downtime to the business is severe; in large-scale banking networks, for example, it can reach $2,600,000 per hour. Self-healing systems will be able to take immediate action to resolve an issue, even if further analysis is required. Rules for self-healing will need to be defined and applied, and as autonomic systems become more sophisticated, embedded intelligence will be used to discover new rules and objectives. For example, recall from the previous section that IBM will be building SMART (Self-Managing and Resource Tuning) technology into upcoming versions of its DB2 database product. This database is designed to run with less need for human intervention: the user can opt not to be involved, and the database will automatically detect failures when they occur and configure itself, installing operating systems and data automatically to cope with the changing demands of e-business and the Internet [19]. Typical self-healing tasks include:
1. Self-correcting Job Control Language (JCL): when a job fails, the errors or problems are identified and jobs rerun without human intervention.
2. An application error forces the entire system to halt. After root-cause analysis, the faulty code is corrected, recompiled, tested, and moved back into production.
3. A database index fails. The files are automatically re-indexed, tested, and loaded back into production.
4. Automatically extend file space and database storage, according to previous data on growth and expansion.
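Task 1 above — rerunning a failed job without human intervention — can be sketched as a bounded retry loop that also keeps an error log for root-cause analysis. The job runner and the `flaky_job` example are hypothetical, assumed purely for illustration.

```python
# Minimal self-healing sketch (hypothetical job runner): when a job
# fails, the error is logged for later root-cause analysis and the
# job is rerun automatically, up to a retry limit.

def run_with_healing(job, max_retries=3):
    """Run `job`, retrying on failure; return (result, error_log)."""
    error_log = []
    for attempt in range(1, max_retries + 1):
        try:
            return job(attempt), error_log
        except Exception as exc:           # record for root-cause analysis
            error_log.append((attempt, str(exc)))
    return None, error_log                 # escalate: retries exhausted

def flaky_job(attempt):
    """Stand-in for a job that fails twice, then succeeds on rerun."""
    if attempt < 3:
        raise RuntimeError("transient index failure")
    return "reindexed and reloaded"

result, log = run_with_healing(flaky_job)
```

The error log is the crucial part: it is what lets embedded intelligence later mine failure patterns and derive new healing rules, as the paragraph above anticipates.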

4.4 Self-protecting

In an increasingly hostile corporate world, autonomic systems must detect numerous threats and protect valuable corporate assets against them. They must maintain integrity and accuracy and be responsible for overall system security. For years before the Internet, each corporation was an isolated island where threats usually came from within.

Now, outside threats come daily, and security and protection are paramount. Threats must be identified quickly and protective action taken.

Autonomic system solutions must address all aspects of system security at the platform, operating system, network, application, Internet, and infrastructure levels. This involves developing new cryptographic techniques and algorithms, their secure implementation, and designing secure networking protocols, operating environments, and mechanisms to monitor and maintain overall system integrity. Such security solutions need to be standardized to provide/preserve interoperability and to ensure that these techniques are used in a correct way.

Achieving this will require continuous sensors feeding data to a protection center. A log of events will be written and accessed for audit purposes when appropriate. To manage threat levels, we might expect a tiered model, with threats escalated through the tiers for increasing action and priority. Typical self-protecting tasks include:
1. Confirm the availability of backup and recovery resources that may be needed.
2. Implement tiered security levels.
3. Focus resources on network monitoring and immediately disconnect computer systems with suspicious network traffic.
4. Verify that network configuration inventories are correct and, if not, take action.
5. Contact system administrators outside the autonomic system, and other offices that may be affected by the increasing threat levels.
6. Have the system verify that all computer systems are at the appropriate version levels, including "patches." Update automatically as needed.
7. Resolve any open security concerns.
8. Implement any special software for additional security protection according to the threat level.
9. Contact offsite vendors to determine whether any preventive measures (patches, etc.) need to be applied to hardware or software.
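The tiered escalation described above can be sketched as a small protection center. The tier names, thresholds, and event weights here are all assumptions for illustration: sensor events are logged for audit, aggregated into a threat score, and mapped to the highest tier whose threshold the score meets.

```python
# Toy tiered threat-handling sketch (assumed tier names and thresholds):
# sensor events feed a protection center, which keeps an audit log and
# escalates through tiers as the aggregate threat score rises.

TIERS = [(0, "monitor"), (3, "restrict"), (7, "disconnect")]

def classify(score):
    """Map a threat score to the highest tier whose threshold it meets."""
    action = TIERS[0][1]
    for threshold, tier_action in TIERS:
        if score >= threshold:
            action = tier_action
    return action

def protect(events):
    """Log sensor events for audit, score them, and pick an action."""
    audit_log = list(events)           # audit trail of all raw events
    score = sum(weight for _, weight in events)
    return classify(score), audit_log

action, audit_log = protect([("port_scan", 2), ("malware_signature", 6)])
```

Here the two events together cross the highest threshold, so the suspicious system is disconnected (task 3 above), while the audit log preserves the raw events for later review.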

In the next chapter, Table 4.1 compares the four states of autonomic computing with how we manage systems today and with what full autonomic systems will be like.
