
Speedy Action to Adapt to Change

Swift action must be taken to adapt to continuous change. With globalization, new developments in software technology appear daily, and delay can be dangerous. The need of the hour, then, is not only to develop software but also to protect it from external and internal threats.

Successful software systems often suffer the same fate that befell prosperous ancient cities. Developers design software systems based on needs and constraints imposed by external factors that change over time. These systems might align well with the current mission, marketplace, or line of business, drawing a strong client base. But as the number of users grows, so do demands on the system.

These demands can imperil the system's performance by exceeding the intended resource level. Other demands call for capabilities that the original system design does not support.

In these cases, the pressure is to change the software in response to evolving requirements. Ultimately, developers address these demands and change the software within unrealistic resource constraints. Like building outside the city walls, this means potentially compromising resources and structures. With repeated changes, the software becomes less changeable—making the system brittle. Thus, the software system's success could contribute to its demise.

While software engineers face a situation similar to that of their ancient city-building ancestors, technologies are available to help them develop and evolve systems that respond quickly and efficiently to a changing environment.

Software complexity is the degree to which software is difficult to analyze, understand, or explain. Figure 1 illustrates a trend that has persisted since the mid-1970s: As society increasingly depends on software, the size and complexity of software systems continues to grow—making them progressively more difficult to understand and evolve.

This trend has dramatically accelerated in recent years with the advent of Web services, agent-based systems, autonomic and self-healing systems, reconfigurable computing, and other advances. Software's complexity has compounded in both volume (structure) and interaction (social) as the Internet has enabled delivering software functionality as services.

Yet, most technologies that we use to develop, maintain, and evolve software systems do not adequately cope with complexity and change.

Traditionally, software engineers respond to complexity by decomposing systems into manageable parts to accommodate the sheer number of elements and their structure. However, the Internet and the emergence of software as services have led to a new kind of complexity.

What José Luiz Fiadeiro describes as software's social complexity naturally arises from an increase in both the number and intricacy of system interactions ("Designing for Software's Social Complexity," Computer, Jan. 2007, pp. 34-39). Services are inherently social, and interactions stem from a range of dependencies and values.

Service-oriented architectures accordingly reflect the need for flexibility and self-assembly more than size and structure.

Web services

Consider, for example, a global firm that produces a monthly publication, originally in English, that it must translate into various languages for its customers worldwide. The firm employs a Web service broker to acquire and assemble the services necessary to translate the manuscript and distribute the electronic copies to the offices in each country.

Assuming there are competing suppliers for the translation services and acceptable translations of the documents are feasible, the firm then submits the job to the service broker, who in turn identifies the appropriate provider based on the customer profile and the cost of the service. The customer agrees to the provision, and the provider translates and sends the publication.
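To make the brokering step concrete, the following Python sketch shows one way a broker might match a translation job against competing providers; the Provider record, its cost and quality fields, and the select_provider helper are illustrative assumptions rather than any actual brokerage API.

```python
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    language: str          # target language the provider offers
    cost_per_page: float
    quality_rating: float  # e.g., aggregated customer feedback, 0..1

def select_provider(providers, language, max_cost_per_page, min_quality):
    """Pick the cheapest provider that satisfies the customer profile."""
    candidates = [p for p in providers
                  if p.language == language
                  and p.cost_per_page <= max_cost_per_page
                  and p.quality_rating >= min_quality]
    return min(candidates, key=lambda p: p.cost_per_page) if candidates else None

providers = [
    Provider("AlphaTrans", "fr", 4.50, 0.92),
    Provider("BetaLingua", "fr", 3.75, 0.88),
    Provider("GammaText", "de", 4.00, 0.95),
]

chosen = select_provider(providers, "fr", max_cost_per_page=5.00, min_quality=0.85)
print(chosen.name if chosen else "no acceptable provider")  # BetaLingua
```

Selecting the cheapest acceptable provider is only one policy; a real broker would weigh delivery time, reputation, and contractual terms as well.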

At this point, if developers could construct the software service components for this specific provision, software impact analysis and visualization tools could readily provide the traditional structure and dependency depictions necessary to understand the system. However, the missing aspects are the business interaction dependencies necessary to resolve problems that might arise.

Continuing with this example, suppose the French translation is error-ridden and must be corrected. Who fixes the problem? Not the customer: the opportunity to operate the service has passed with the initial use. Not the maintainer: the code and even the executable image are inaccessible for local remedy. The broker, who is in the same situation, can at least negotiate a solution, but will it be done in time?

Because the process does not adequately capture the various interactions and their intricacies, problems arise around responsibility. Understanding the dependencies and relationships between the provisioned services and providers is daunting, and because the broker is often unclear on where its knowledge boundaries lie, discovering the problem through normal impact analysis can be out of reach.
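A small sketch makes the gap visible. Conventional impact analysis amounts to a reachability walk over a dependency graph; the hypothetical graph below mixes the structural edges such tools record with the business interaction edges, from broker agreement to provider contract, that they usually miss.

```python
from collections import deque

# Edges point from an artifact to the artifacts that depend on it.
# Structural (code-level) and interaction (business-level) edges are mixed;
# conventional tools usually capture only the former.
depends_on_me = {
    "broker_agreement": ["provider_contract"],      # interaction edge
    "provider_contract": ["translation_service"],   # interaction edge
    "translation_service": ["french_edition"],
    "french_edition": ["paris_office_feed"],
}

def impact_set(changed, graph):
    """All artifacts transitively affected by a change to `changed`."""
    seen, queue = set(), deque([changed])
    while queue:
        for dependent in graph.get(queue.popleft(), []):
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return seen

# What does renegotiating the broker agreement touch?
print(impact_set("broker_agreement", depends_on_me))
# With the interaction edges: provider_contract, translation_service,
# french_edition, paris_office_feed. Without them, the walk finds nothing.
```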

This example shows, in a simple way, how the state of the practice for software impact analysis technology fails to address this new social complexity.

Autonomic and self-healing systems

Similar examples can be given for autonomic and self-healing systems. Understanding the number and intricacies of their interactions likewise provides insight into appropriate responses in a changing environment.

For self-healing systems, anomaly detection, diagnosis, replacement planning, and execution timing are all functions of components' interactions in the operating environment as illustrated in David Garlan and colleagues' work with the Rainbow framework ("Rainbow: Architecture-Based Self-Adaptation with Reusable Infrastructure," Computer, Oct. 2004, pp. 46-54).
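Rainbow itself is architecture-based and far richer, but the monitor-detect-plan-execute loop at its core can be suggested in a few lines of Python; the latency probe, the threshold, and the repair action below are invented for illustration.

```python
import random
import time

LATENCY_LIMIT_MS = 200  # assumed service-level threshold

def probe_latency(server):
    """Stand-in for a monitoring probe; returns observed latency in ms."""
    return random.gauss(320 if server == "replica-2" else 150, 20)

def plan_repair(server):
    """Pick a repair strategy; a real framework consults an architecture model."""
    return f"route traffic away from {server} and start a replacement"

def adaptation_loop(servers, cycles=3):
    for _ in range(cycles):
        for server in servers:
            latency = probe_latency(server)        # monitor
            if latency > LATENCY_LIMIT_MS:         # detect anomaly
                print(f"{server}: {latency:.0f} ms -> {plan_repair(server)}")
        time.sleep(0.1)                            # next observation cycle

adaptation_loop(["replica-1", "replica-2"])
```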

With these emergent technologies, software engineers must cope with ever-increasing interactions and dependencies.

CHANGE-TOLERANCE SUPPORT

During the 1990s, the software industry shifted from a custom-solution paradigm to mass customization in packaged applications, and it is now transitioning to a service-oriented paradigm. However, this is not to say that custom solutions or mass customization have gone away.

Given that computing hardware is viewed as a commodity and the Internet makes delivery trivial, the economic weight now falls on assembling modular components into evolving solutions: services composed from canonical components offered by competing sources. Economically viable software components can be standardized and reused at many levels of scale.

A key aspect of software is its capacity or tolerance for change. Inspired by aspects of fault tolerance, change tolerance connotes software's ability to evolve within the bounds of its original design—the degree to which software change is intentional.

A maintenance view of corrective, adaptive, and perfective change is one type of software change. However, this type of change doesn't really address the variant and invariant nature captured in Bertrand Meyer's open/closed principle: open for extension, closed for modification (Object-Oriented Software Construction, Prentice Hall, 1988). Designing for change at the product level, as in reconfigurable computing, or at the process level, as in model reuse, represents other types of software change.
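The principle is easiest to see in code. In the Python sketch below, with invented class names, the Exporter interface is closed for modification while remaining open for extension: supporting a new output format means adding a subclass, not editing what already works.

```python
from abc import ABC, abstractmethod

class Exporter(ABC):
    """Closed for modification: callers depend only on this interface."""
    @abstractmethod
    def export(self, text: str) -> str: ...

class PlainExporter(Exporter):
    def export(self, text: str) -> str:
        return text

class HtmlExporter(Exporter):
    def export(self, text: str) -> str:
        return f"<p>{text}</p>"

def publish(document: str, exporter: Exporter) -> str:
    return exporter.export(document)

# Open for extension: a new format is a new subclass; publish() never changes.
class UppercaseExporter(Exporter):
    def export(self, text: str) -> str:
        return text.upper()

print(publish("monthly issue", HtmlExporter()))      # <p>monthly issue</p>
print(publish("monthly issue", UppercaseExporter())) # MONTHLY ISSUE
```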

Industry approaches software change using top-down model-based methods such as the Object Management Group's model-driven architecture and bottom-up agile methods such as extreme programming. Both address the risks of producing large volumes of software on shorter timelines, but from different perspectives.

Through a series of elaborations and refinements, model-based approaches systematically move from abstract computationally independent models, to platform-independent models, to concrete platform-specific models—organizing knowledge and leveraging reuse at appropriate levels. The complexities include interactions, mappings, and transforms in the populated model repositories that evolve over time.
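Before turning to the contrasting agile view, the elaboration chain can be caricatured in a few lines of Python; the dictionaries and transform functions stand in, very loosely, for real computation-independent, platform-independent, and platform-specific models and their repositories.

```python
# Computation-independent model: pure business vocabulary.
cim = {"entity": "Subscription", "rule": "renew monthly"}

def cim_to_pim(model):
    """Elaborate: add computational structure, still platform-neutral."""
    return {**model, "attributes": ["id", "start_date"], "operations": ["renew()"]}

def pim_to_psm(model, platform):
    """Refine: bind the model to a concrete platform."""
    bindings = {"java_ee": "@Entity class Subscription { ... }",
                "rest": "POST /subscriptions/{id}/renew"}
    return {**model, "platform": platform, "artifact": bindings[platform]}

pim = cim_to_pim(cim)
for platform in ("java_ee", "rest"):
    psm = pim_to_psm(pim, platform)
    print(psm["platform"], "->", psm["artifact"])
```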

In contrast, through a series of short, well-orchestrated releases, agile approaches employ proven techniques such as test-driven development, refactoring, and pair programming to reduce risk and deliver value—changing software in manageable increments and leveraging the strengths of people working together.
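Test-driven development, for instance, inverts the usual order of work: the tests below are written first and fail until the smallest satisfying implementation is added. The bulk-discount function is a made-up example, sketched in Python.

```python
import unittest

def bulk_discount(quantity, unit_price):
    """Smallest implementation that makes the tests pass; refactor later."""
    price = quantity * unit_price
    return price * 0.9 if quantity >= 100 else price

class BulkDiscountTest(unittest.TestCase):
    # In TDD these tests come first and drive the design of bulk_discount.
    def test_no_discount_below_threshold(self):
        self.assertAlmostEqual(bulk_discount(10, 2.0), 20.0)

    def test_ten_percent_off_at_threshold(self):
        self.assertAlmostEqual(bulk_discount(100, 2.0), 180.0)

if __name__ == "__main__":
    unittest.main()
```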

Model-based and agile approaches are proving to be effective ways to develop and evolve software systems. Although both of these methods require considerable visibility into a product's complex nature to get it right, neither method specifically addresses the number and intricacy of interactions.

ANALYZING AND VISUALIZING SOFTWARE IMPACTS

The complexity of today's software systems often exceeds human comprehension. Automated support for analyzing and visualizing software impacts and navigating software artifacts is no longer a luxury. Understanding software impacts makes it easier to design, implement, and change software: Tradeoffs become clear, ripple effects become more certain, and estimates become more accurate.

Software-change impact analysis (SCIA) has largely been associated with software maintenance. Yet, software changes occur from the first day of development. The more artifacts that are produced, the more complexity becomes an issue, and the more engineers need instruments to see and understand what they are doing.

SCIA has evolved from the source-code-centric analyses demonstrated with the Y2K and Euro currency conversion efforts a decade ago. Since then, it has continued to incorporate more software artifacts and semantically rich representations.

Employing information retrieval and search technologies has revealed new ways of identifying and reasoning about impacts through traceability relationships. Using change histories to show temporally related modifications from the past offers insight into potential change-tolerant design strategies for the future.
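Co-change mining is one of the simpler forms this takes: count how often artifacts changed in the same commit, then use those counts to flag likely ripple effects. The commit log in the Python sketch below is fabricated, and min_support is an assumed cutoff.

```python
from collections import Counter
from itertools import combinations

# Hypothetical change history: each entry is the set of files in one commit.
commits = [
    {"invoice.py", "tax.py"},
    {"invoice.py", "tax.py", "report.py"},
    {"report.py"},
    {"invoice.py", "tax.py"},
]

co_change = Counter()
for files in commits:
    for pair in combinations(sorted(files), 2):
        co_change[pair] += 1

def likely_impacts(changed, min_support=2):
    """Files that historically changed alongside `changed` often enough."""
    hits = []
    for (a, b), count in co_change.items():
        if count >= min_support and changed in (a, b):
            hits.append(b if a == changed else a)
    return sorted(hits)

print(likely_impacts("invoice.py"))  # ['tax.py']
```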

Perhaps the most significant advance in impact analysis is the use of software visualization technologies to illuminate patterns in software artifacts. Visualization reduces the perceived complexities of software and thereby helps engineers better analyze, understand, or explain aspects of software systems.
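Even rudimentary tooling helps here. The sketch below emits Graphviz DOT text for a hypothetical module-dependency map, shading a changed module so that any DOT viewer can render the ripple effects at a glance; the module names are invented.

```python
dependencies = {  # hypothetical: module -> modules it affects
    "parser": ["typechecker", "formatter"],
    "typechecker": ["codegen"],
    "codegen": [],
    "formatter": [],
}

def to_dot(graph, changed):
    """Emit Graphviz DOT text, shading the changed module."""
    lines = ["digraph impacts {",
             f'  "{changed}" [style=filled, fillcolor=lightgray];']
    for source, targets in graph.items():
        for target in targets:
            lines.append(f'  "{source}" -> "{target}";')
    lines.append("}")
    return "\n".join(lines)

print(to_dot(dependencies, changed="parser"))
```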

Whether used to navigate the myriad mappings and transforms in model-driven architecture, to clarify a design refactoring in extreme programming, or to discern the impacts of a maintenance change, the combination of SCIA and visualization provides an essential software technology.


