This paper provides an overview of Autonomic Computing, organized under the following topics.
1.0 Introduction
2.0 Why Do We Need Autonomic Computing?
3.0 Major Researches
3.1 Research projects in autonomic computing
3.2 University research projects in autonomic computing
4.0 Four basic elements of autonomic computing
4.1 Self-configuring
4.2 Self-healing
4.3 Self-optimizing
4.4 Self-protecting
Autonomic Computing vs. Current Computing
5.0 Autonomic computing architecture
6.0 Autonomic computing today
6.1 Initiatives of autonomic computing
6.2 Benefits
6.3 Applications
7.0 What Happens If We Do Not Have Autonomic Computing in the Future?
8.0 Autonomic Computing Research Issues and Challenges
9.0 Future direction of Autonomic computing
10.0 Conclusion
3.2 University research projects in autonomic computing
IBM Research contributes to university research in a number of ways, including monetary awards and equipment grants. The following projects are funded and co-managed by IBM Research and other units within IBM.
1. OceanStore, University of California, Berkeley
OceanStore is a global persistent data store designed to scale to billions of users. It provides a consistent, highly available, and durable storage utility atop an infrastructure composed of untrusted servers. Any computer can join the infrastructure; users need only subscribe to a single OceanStore service provider, although they may consume storage and bandwidth from many different providers. Researchers at Berkeley are exploring the space of Introspective Computing, that is, systems that perform continuous, online adaptation. Applications include on-chip tolerance of flaky components and continuous optimization to adapt to server failures and denial-of-service attacks, as well as autonomic computing. [23]
2. Recovery-Oriented Computing, University of California, Berkeley
The Recovery-Oriented Computing (ROC) project is a joint Berkeley/Stanford research project that is investigating novel techniques for building highly dependable Internet services. ROC emphasizes recovery from failures rather than failure avoidance. This philosophy is motivated by the observation that even the most robust systems still occasionally encounter failures due to human operator error, transient or permanent hardware failure, or software anomalies resulting from software aging. [21]
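The ROC philosophy of preferring fast recovery over failure avoidance can be illustrated with a small sketch. The component, its "micro-reboot" reset, and the retry count below are illustrative assumptions, not part of the ROC project itself:

```python
class Component:
    """A toy service component whose state can be cheaply rebuilt,
    so a restart is always a safe way out of an anomalous state."""
    def __init__(self):
        self.reset()

    def reset(self):
        # Recovery-oriented design: state is reconstructed from scratch
        # rather than repaired in place.
        self.state = {"requests": 0}

    def handle(self, request):
        self.state["requests"] += 1
        if request == "bad":
            raise RuntimeError("transient fault")
        return f"ok:{request}"


def handle_with_recovery(component, request, max_retries=2):
    """On failure, micro-reboot the component and retry instead of
    trying to anticipate every possible fault in advance."""
    for _ in range(max_retries + 1):
        try:
            return component.handle(request)
        except RuntimeError:
            component.reset()  # cheap, bounded-time recovery action
    return "error"
```

The key design point is that recovery (the `reset`) is made cheap and predictable, so the system can afford to invoke it often rather than investing all effort in preventing failures.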
3. The Anthill project, University of Bologna, Italy
Anthill is a framework built to support the design, implementation, and evaluation of peer-to-peer (P2P) applications. P2P systems are characterized by decentralized control, large scale, and extreme dynamism of their operating environment and can be seen as instances of Complex Adaptive Systems, typically found in biological and social sciences. Anthill exploits this analogy and advocates a methodology whereby the desired application properties correspond to the "emergent behavior" of the underlying complex adaptive system. An Anthill system consists of a dynamic network of peer nodes; societies of adaptive agents travel through this network, interacting with nodes and cooperating with other agents in order to solve complex problems. Anthill can be used to construct different classes of P2P services that exhibit resilience, adaptation, and self-organization properties. [5]
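The ant-like agents described above can be sketched as a random walk over a peer overlay. The four-node network, the key names, and the time-to-live parameter below are illustrative assumptions, not Anthill's actual API:

```python
import random

# Toy peer-to-peer overlay: each node knows its neighbours and stores some keys.
network = {
    "A": {"neighbours": ["B", "C"], "keys": {"song1"}},
    "B": {"neighbours": ["A", "D"], "keys": set()},
    "C": {"neighbours": ["A", "D"], "keys": {"song2"}},
    "D": {"neighbours": ["B", "C"], "keys": {"song3"}},
}

def ant_search(start, key, ttl=8, rng=random):
    """An 'ant' agent wanders the overlay one hop at a time,
    until it finds the key or its time-to-live runs out."""
    node, path = start, [start]
    for _ in range(ttl):
        if key in network[node]["keys"]:
            return node, path  # key found at this node
        node = rng.choice(network[node]["neighbours"])  # move to a random neighbour
        path.append(node)
    return None, path  # give up after ttl hops
```

No node has a global view; the search emerges from many such local decisions, which is the "emergent behavior" analogy the project draws with complex adaptive systems.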
4. Software Rejuvenation, Duke University
Software rejuvenation is a proactive fault management technique aimed at cleaning up a system's internal state to prevent the occurrence of more severe crash failures in the future. It involves occasionally terminating an application or a system, cleaning its internal state, and restarting it. Current methods of software rejuvenation include system restart, application restart (partial rejuvenation), and node/application failover (in a cluster system). Software rejuvenation is a cost-effective technique for dealing with software faults that include protection not only against hard failures, but against performance degradation as well. Duke University collaborated with IBM to develop the IBM Director Software Rejuvenation tool. [6]
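The terminate/clean/restart cycle described above can be sketched as a minimal supervisor. This is an illustrative sketch only (the function name, interval, and cycle count are assumptions), not the IBM Director Software Rejuvenation tool:

```python
import subprocess
import time

def supervise_with_rejuvenation(cmd, interval_s, cycles):
    """Proactively restart a worker process on a fixed interval, before
    software aging (leaks, fragmentation, stale state) can cause a crash."""
    exit_codes = []
    for _ in range(cycles):
        worker = subprocess.Popen(cmd)   # fresh process: clean internal state
        time.sleep(interval_s)           # let it serve for one rejuvenation period
        worker.terminate()               # proactive restart, not crash recovery
        exit_codes.append(worker.wait()) # reap the old process; a fresh one follows
    return exit_codes
```

This corresponds to the "application restart (partial rejuvenation)" method; in a cluster, the same idea is applied by failing the workload over to another node before restarting.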
5. Bio-Inspired Approaches to Autonomous Configuration of Distributed Systems, University College London, England
Next generation networks require new control techniques to increase automation and deal with complexity. Active networks in particular will require the management and control systems to evolve extremely rapidly, since users will be continuously adding new applications, services, and virtual configurations. This research is exploring novel ad-hoc distributed control algorithms and architectures derived from biological and geophysical systems and measurements of fabricated systems such as the World Wide Web. [16]