MBA-FP6182: Impact of Advances in Information Technology
U04A6 High Availability and Multi-Core Processors
An Advanced Technology for Computer Systems Organizations
Executive Summary
Enterprise systems increasingly demand high availability, which is often considered more valuable than raw performance. Systems designed for this purpose normally use hardware redundancy to detect errors and to continue operating when a failure occurs. Chip multiprocessors, with their plentiful identical resources such as cores, caches, and network interconnects, provide a suitable building block for implementing high availability on a chip. However, they raise problems of error containment and replacement of faulty components, and further technology scaling will exacerbate these problems as silicon defects and transient faults increase. This paper therefore proposes a novel, cost-effective architecture built on future multicore processors. The approach yields a new multiprocessor chip that provides configurable isolation for fault containment and component retirement, weighing the necessary design modifications against their cost-effectiveness. The proposed architecture can isolate faults effectively and degrade gracefully should the system suffer a failure.
Enterprise Architecture Framework for the Organization
For a long time, traditional high-performance computing has failed to meet the needs placed on it. In the recent past, data volumes have risen while workloads have become progressively heavier (Wilshire, 2014). For this reason, innovation in computer performance is needed. New inventions such as high-availability multi-core processors are intended to close the existing gaps and offer the required breakthrough. Importantly, the technology can function in small clusters as well as the largest supercomputers (Wilshire, 2014). Additionally, it offers the balance and scalability needed for data-intensive applications, visualization, and artificial intelligence. The design focuses on moving everything closer to the processor to enhance bandwidth, allow more time for processing, and reduce waiting and latency. The processors will feature a variety of special technologies that improve overall performance and parallel throughput while minimizing energy consumption (Wilshire, 2014).
High Availability and Multi-Core Processors
The rapid and dynamic processing of information has brought computer-based systems into every part of daily life, and future improvements to computer systems will support many aspects of human life. Power consumption and the heat it generates are among the most important constraints on current and future processors (Kalpakjian & Schmid, 2014). The semiconductor industry still delivers more transistors per chip, so the concern persists. Organizations in this sector are already shifting their architectures to accommodate multiple cores, multiple threads, and last-level caches. This allows processors to be clocked at lower frequencies, with lower power and energy consumption, while maintaining improved performance (Hoffman, 2014). Future multi-core processors need new and improved architectural designs so that power consumption and core temperatures can be controlled.
The Alignment of the Technology to the Business Goals
High-performing computers are crucial across business spectrums such as academia, the sciences, and industry. Core business activities such as commercial services, customer relations, and information technology are closely intertwined, which makes high availability an important feature in computer systems organizations. It is emerging as the major need within organizations' systems and is often considered more important than performance (Kalpakjian & Schmid, 2014). The main reason is that much research has shown the high cost of server outages and the growing cost of downtime.
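To make the downtime point concrete, the short Python sketch below converts an availability level into expected annual downtime and an outage cost. The cost-per-hour rate is a purely hypothetical figure chosen for illustration; it does not come from the research cited above.

    # Illustrative sketch: annual downtime and outage cost at a given availability.
    # The cost-per-hour figure is a hypothetical assumption, not a cited value.

    HOURS_PER_YEAR = 24 * 365

    def annual_downtime_hours(availability):
        """Expected downtime per year for a given availability (e.g., 0.999)."""
        return (1.0 - availability) * HOURS_PER_YEAR

    def annual_outage_cost(availability, cost_per_hour):
        """Expected yearly outage cost under a flat cost-per-hour model."""
        return annual_downtime_hours(availability) * cost_per_hour

    for a in (0.99, 0.999, 0.9999, 0.99999):
        print(f"{a:.5f} availability: {annual_downtime_hours(a):8.2f} h/yr down, "
              f"~${annual_outage_cost(a, 8000):>12,.0f}/yr")  # hypothetical $8,000/h

Even moving from 99.9% to 99.99% availability cuts expected downtime from roughly 8.8 hours to under an hour per year, which is why availability is so often weighed above raw performance.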
The Framework and the Maturity Models
IBM and HP are the organizations currently providing high-availability systems. Their systems cover hardware faults by combining redundant processors with error-correcting codes that fix errors in memory (Gaillardon et al., 2015). Through redundant hardware, faults can be detected, lowering costly errors, and the application continues executing without downtime until the problem is managed. The current approach varies from organization to organization.
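As a concrete illustration of the error-correcting codes mentioned above, the Python sketch below implements a minimal Hamming(7,4) single-error-correcting code. It is a teaching-scale example, not the ECC actually used in IBM or HP memory systems, which protect wider words and typically also detect double-bit errors (SECDED).

    # Minimal Hamming(7,4) code: 4 data bits, 3 parity bits, corrects any
    # single-bit error. Illustrative only; real server ECC is more elaborate.

    def encode(d1, d2, d3, d4):
        """Build the 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
        p1 = d1 ^ d2 ^ d4      # parity over positions 1, 3, 5, 7
        p2 = d1 ^ d3 ^ d4      # parity over positions 2, 3, 6, 7
        p3 = d2 ^ d3 ^ d4      # parity over positions 4, 5, 6, 7
        return [p1, p2, d1, p3, d2, d3, d4]

    def decode(word):
        """Locate and correct a single flipped bit, then return the data bits."""
        w = list(word)
        s1 = w[0] ^ w[2] ^ w[4] ^ w[6]
        s2 = w[1] ^ w[2] ^ w[5] ^ w[6]
        s3 = w[3] ^ w[4] ^ w[5] ^ w[6]
        syndrome = s1 + 2 * s2 + 4 * s3     # 1-based position of the faulty bit
        if syndrome:
            w[syndrome - 1] ^= 1            # correct it in place
        return [w[2], w[4], w[5], w[6]]

    codeword = encode(1, 0, 1, 1)
    codeword[5] ^= 1                        # inject a single-bit memory fault
    assert decode(codeword) == [1, 0, 1, 1] # data recovered despite the fault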
Current chip multiprocessors, which combine multiple processors, caches, and network interconnects on one die, are well suited to high availability. Using the chip as the basic unit for building high-availability systems puts the entire system solution on one chip (Gaillardon et al., 2015); for instance, a four-node high-availability cluster can be configured from eight-core chip multiprocessors. These systems face challenges, however, mainly in core-level reconfiguration and error containment. For instance, several cores sharing a memory controller on a single chip can cause system failures and errors that are hard to detect, among other related issues (Kalpakjian & Schmid, 2014). Furthermore, once an error is detected, the affected cores cannot carry out meaningful computation, so the entire multi-core chip must be replaced regardless of which components still function.
Advancement of the Proposed System, Risk Management, and Its Monetary Effects
The proposed new multiprocessor chip design enables the isolation of faulty items at a lower level. It also allows the commodity architecture to be modified and reconfigured at relatively low cost (Wilshire, 2014). The new hardware design requires only modest additional hardware support, achieved by repartitioning the chip multiprocessor into multiple fault zones. Fault zones facilitate fault detection and let the system reuse its remaining resources when one on-chip component is affected by a fault; this occurs at the system level (Hoffman, 2014). The new system also enables an optimization that dynamically reassigns power to the active components if one of the multiprocessor's components fails. This power reallocation allows the remaining cores to be scaled up, mitigating the performance loss despite the failed components (Kalpakjian & Schmid, 2014). System simulations against high-availability models from system vendors indicate that the proposed system outperforms alternative approaches.
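A minimal sketch of the fault-zone idea follows, assuming a chip whose cores and memory controllers are grouped into zones. The zone layout and component names are hypothetical, not the actual floorplan of the proposed design.

    # Illustrative fault-zone model: a fault retires only the zone that owns
    # the faulty component, and the rest of the chip keeps running.

    class Zone:
        def __init__(self, name, cores, memory_controller):
            self.name = name
            self.cores = cores
            self.memory_controller = memory_controller
            self.failed = False

    class Chip:
        def __init__(self, zones):
            self.zones = zones

        def report_fault(self, component):
            """Contain a fault by retiring the zone that owns the component."""
            for zone in self.zones:
                if component in zone.cores or component == zone.memory_controller:
                    zone.failed = True
                    return zone.name
            raise ValueError(f"unknown component: {component}")

        def active_cores(self):
            """Cores still usable after reconfiguration."""
            return [c for z in self.zones if not z.failed for c in z.cores]

    chip = Chip([Zone("Z0", ["c0", "c1"], "mc0"),
                 Zone("Z1", ["c2", "c3"], "mc1")])
    print(chip.report_fault("c1"))   # Z0 is retired
    print(chip.active_cores())       # ['c2', 'c3'] continue to run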
There are continuing trends in the field of technology, the most prominent being scaling and major improvements in transistor density (Hoffman, 2014). This has led to chip multiprocessors with ever more cores, but reliability has dropped in turn, and the two need to be brought into balance. Accommodating more cores in a multiprocessor provides greater computational capacity and capability (Hoffman, 2014) while enabling integration at the chip level. As a result, cost and performance benefits are realized through improved resource sharing and a reduced component count.
Technology scaling has also brought problems, especially in hardware, where the number of errors grows as scaling progresses. Hard errors result from defects in the silicon itself and from silicon wear-out over time (Kalpakjian & Schmid, 2014). Soft errors, by contrast, are random bit flips caused by electronic noise or external radiation, and their rising likelihood now extends to multi-bit upsets. Transient faults that cause logic errors cannot be left unaddressed, considering that the number of vulnerable logic state bits doubles with every generation. Better, more sophisticated approaches are therefore needed to address soft errors (Hoffman, 2014), including redundant execution to verify logic at all levels, lower-end systems included.
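The redundant execution mentioned above can be pictured with the dual modular redundancy (DMR) sketch below: the same computation runs twice and the results are compared, so a transient flip in either run is caught. This is a conceptual model, not the detection mechanism of any particular product.

    # Conceptual DMR sketch: execute twice, compare, retry on mismatch.
    import random

    def compute(x):
        """The computation both replicas are meant to perform."""
        return x * x + 1

    def flaky_compute(x, soft_error_rate):
        """Replica that occasionally suffers a transient flip in its result."""
        result = compute(x)
        if random.random() < soft_error_rate:
            result ^= 1 << random.randrange(16)   # flip one random bit
        return result

    def dmr_execute(x, soft_error_rate=0.05, max_retries=3):
        """Run on two replicas; a mismatch signals a soft error, so retry."""
        for _ in range(max_retries):
            a = flaky_compute(x, soft_error_rate)
            b = flaky_compute(x, soft_error_rate)
            if a == b:
                return a        # the replicas agree, accept the result
        raise RuntimeError("persistent disagreement: suspect a hard fault")

    print(dmr_execute(12))      # almost always prints 145

Note that DMR only detects disagreement; correcting an error without a retry requires a third replica (triple modular redundancy) to vote.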
Chip multiprocessors with multiple cores provide the basic requirements for building systems with the much-needed high availability, and such systems can mitigate the effect of shrinking transistors on reliability. To meet this objective, however, some challenges must be addressed first, both engineering and architectural. The most important is the development of high-availability chip-multiprocessor architectures capable of dealing with the faults of future technologies. These architectures should detect faults and isolate them at a relatively fine granularity, so that a single error or fault does not paralyze the entire chip. Developing a high-availability system architecture is not enough; there is also the engineering question of making it practical. In the coming years, few systems will be in a position to embrace full high-availability DMR solutions (Galegher, Kraut & Egido, 2014). Therefore, high-availability enhancements to multiprocessor architectures must be inexpensive and nonintrusive compared with other multiprocessor systems.
In the past, the basic building blocks of such systems were memory, caches, and controllers, together with identical processors. System designers achieved proper fault isolation by combining these building blocks at the chip level and configuring them redundantly at the board level, incorporating small amounts of glue logic where necessary (Kalpakjian & Schmid, 2014). Fault isolation to a single chip was then considered adequate. The HP NonStop Advanced Architecture is the classic example, in which process pairs and fault containment are implemented at the socket level.
However, given developments in chip-multiprocessing-based approaches, isolation at the socket level is less valuable and is becoming unattractive (Galegher et al., 2014). The main challenge is designing chip multiprocessors that use on-chip mechanisms to deliver high-availability, conventional, or non-redundant systems with suitable fault-isolation characteristics. The focus should be on techniques that allow off-the-shelf configuration of chip multiprocessors, with only a little added on-chip hardware and little of the complexity of redundant high-availability systems (Hoffman, 2014).
For cost-favorable high-availability systems, the architecture must confine the effects of faults to smaller units. One alternative is the conventional architecture in which individual computers are fabricated on the same die (Gaillardon et al., 2015), each with its own memory controller and connections. However, this partitions cache resources and so prevents any sharing, reducing overall system performance, and it was therefore not adopted in the proposed chip-multiprocessor design (Hoffman, 2014). Likewise, partitioning chip pins and interfaces underutilizes the off-chip bandwidth. Such performance inefficiencies make this design unattractive, especially for high-volume applications where performance, not high availability, is the key goal (Gaillardon et al., 2015). The design does, however, provide full isolation.
There is therefore a need to exploit the tradeoffs between full isolation and full sharing. The most suitable proposal is a set of nonintrusive improvements to commodity chip-multiprocessing architectures (Gaillardon et al., 2015) that provide configurable isolation with various levels of availability.
Error handling varies considerably depending on how far errors propagate beyond the processor core. The advancement proposed here narrows the focus to containment at the memory level, where memory and processors are grouped into a fault zone. As a result, detection and recovery are limited to operations that cause device access (Spector et al., 2014). The closest existing design is the HP NonStop approach; however, its containment is at the socket level, whereas the approach considered here contains faults at the memory level within the multiprocessor chip. Containing faults at the memory level is less expensive because fault tolerance is performed at a coarser granularity. The approach is also based on commodity multi-core processors with added on-chip support for fault containment and higher levels of availability (Hoffman, 2014).
Such systems can protect against hard errors. The discussion and analysis here assume that configurable isolation is used together with redundant processors for soft-error detection. Once there is a failure, several reconfiguration options are available; in this approach, the configuration involves an odd number of cores (Spector et al., 2014), meaning an extra core is left unused so that there are as many cores as processes configured in DMR, as the sketch below illustrates. This choice can affect the slope of performance degradation.
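The sketch below models this odd-core configuration: healthy cores are paired for DMR and the odd core is held as a spare, which the pairing absorbs when a core is retired. Core IDs and the pairing policy are hypothetical.

    # Illustrative DMR core pairing with a spare, assuming an odd core count
    # as described above. Core IDs and the failover policy are hypothetical.

    def configure_dmr(cores):
        """Pair healthy cores for DMR; an odd count leaves one core spare."""
        pairs = [(cores[i], cores[i + 1]) for i in range(0, len(cores) - 1, 2)]
        spare = cores[-1] if len(cores) % 2 else None
        return pairs, spare

    def retire_core(cores, failed):
        """Drop a failed core and rebuild the DMR pairing from the survivors."""
        return configure_dmr([c for c in cores if c != failed])

    pairs, spare = configure_dmr(list(range(7)))   # 7 cores: 3 pairs + 1 spare
    print(pairs, spare)                            # [(0,1), (2,3), (4,5)] 6

    pairs, spare = retire_core(list(range(7)), failed=3)
    print(pairs, spare)                            # spare absorbed: 3 pairs, None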
Other operating environments exist as well. For instance, web server firms that guard against soft errors achieve this through process redundancy. However, this is more costly than the alternative of simply rebooting the system when an error or halt is detected (Galegher et al., 2014). In this scenario, process redundancy guarantees fault isolation; its main advantage is that a fault can be isolated to a given component, keeping the system continuously available.
The proposed architecture enables fault isolation and the reuse of resources within the system through the partitioning of the cores, but at the cost of performance degradation (Hoffman, 2014). There is therefore a need to address performance. One remedy is re-provisioning the power budget: when a fault occurs, the power allotment is reassigned to the remaining fault-free components.
Power consumption is concentrated in the cores, which is also where the degradation of performance is felt most; the approach therefore focuses on the cores. The assumption is that if one core fails, its power budget is dynamically reallocated to the remaining active cores to raise their clock frequencies. Two issues must be addressed in this process: the voltage increase needed to support higher frequencies (Klir, 2013), and whether the processor package's thermal design can absorb the extra localized heat. Both challenges can be addressed through judicious design.
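The sketch below models this power re-provisioning under the common simplification that dynamic power scales roughly with the cube of frequency (P ≈ C·V²·f, with V scaling with f). The budget, frequencies, and cap are hypothetical values standing in for the voltage and thermal limits discussed above.

    # Illustrative power re-provisioning: a failed core's budget is shared
    # among the survivors, which are clocked up to a frequency ceiling that
    # models the voltage/thermal limits.

    CHIP_BUDGET_W = 64.0    # hypothetical total core power budget
    BASE_FREQ_GHZ = 2.0     # hypothetical nominal core frequency
    MAX_FREQ_GHZ = 2.6      # hypothetical voltage/thermal frequency ceiling

    def rescaled_frequency(total_cores, failed_cores):
        """Frequency each surviving core can run at after reallocation."""
        active = total_cores - failed_cores
        per_core_before = CHIP_BUDGET_W / total_cores
        per_core_after = CHIP_BUDGET_W / active       # failed budget shared out
        boost = (per_core_after / per_core_before) ** (1 / 3)   # P ~ f^3
        return min(BASE_FREQ_GHZ * boost, MAX_FREQ_GHZ)

    for failed in range(4):
        f = rescaled_frequency(total_cores=8, failed_cores=failed)
        print(f"{failed} failed core(s): survivors at {f:.2f} GHz")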
Recommendations and the Future
The proposed architecture has clear benefits. Fault isolation protects the system from transient as well as permanent faults while still capturing the benefits of sharing (Spector et al., 2014). Reconfiguration enables graceful degradation even when identified chip-multiprocessor components fail, which suitably addresses permanent faults (Klir, 2013). A further optimization arising from reconfiguration is the novel dynamic power re-provisioning, which further softens the degradation caused by faults.
The approach also opens up further policies of interest in the design space: for reconfiguration purposes, the introduction of colors for fault distribution, partition assignment for workload placement, and the re-provisioning of power to the other cores when one core is faulted (Spector et al., 2014). The approach has implications beyond faults, touching performance and security through isolation. The architecture can trade performance against availability, a very important characteristic of the future computing environment (Klir, 2013).
Going into the future, not all general-purpose systems will need such a high level of protection from faults (Hoffman, 2014). The architecture should therefore allow the needed configuration and evaluation, particularly since the proposed architecture will also be used in high-availability configurations. Fault detection using the given techniques is highly effective even though it brings high overhead (Klir, 2013). Moreover, that overhead, once at 100%, remains constant even as the fault rate increases in coming generations, so the solution remains viable as technology continues to scale.
As fault rates keep increasing into the future, configurable-isolation approaches, which require only small, nonintrusive alterations to commodity structures to deliver high availability (Hoffman, 2014), are likely to form a core and integral part of future systems.
The COBIT Business Framework
Current trends in the modern world are characterized by the emergence of new technologies such as social media, cloud computing, big data, and mobility. Information and IT play a critical part in daily occurrences across the globe, and technology makes it easier to support and manage massive volumes of information, improving the success rate of business (Spector et al., 2014). Nevertheless, new complexity and challenges in management and security arise for IT professionals, governance specialists, and enterprise leaders.
The emerging challenges in business can be addressed through the power of information. In this respect, the COBIT business framework is required to assist in risk mitigation. COBIT reflects the latest thinking on the governance of information technology and offers analytical tools, principles, and models that raise the value of, and trust in, information systems (Gaillardon et al., 2015). It is beneficial because it helps maintain the high-quality information needed to support business decisions, and successful use of IT in business contributes to the realization of business goals. Significantly, utilizing the framework is critical because it enhances effective IT management, increases the chances of excellence, and enables firms to realize the value of their IT investments (Hoffman, 2014). Finally, it offers an opportunity to achieve compliance with legal provisions.
Conclusion
By integrating more cores and other components on a chip multiprocessor, the computational capacity and function of the system are improved, which in turn yields performance and cost benefits. The accompanying challenge is that fault containment and the replacement of faulty elements must be handled at the chip-multiprocessor level, which raises costs and downtime, and the expected increase in faults in future technologies exacerbates the challenge. The novel chip-multiprocessor architecture proposed here is the most appropriate response, since it is designed to provide configurable isolation of shared components (Gaillardon et al., 2015). It reconciles the conflict between system integration with resource sharing on the one hand and the needed fault isolation and graceful degradation on the other. Intelligent reconfiguration support achieves this by allowing processor resources to be reallocated and partitions to be formed dynamically, with low area overhead.
References
Gaillardon, P. E., Beigne, E., Lesecq, S., & Micheli, G. D. (2015). A survey on low-power techniques with emerging technologies: From devices to systems. ACM Journal on Emerging Technologies in Computing Systems (JETC), 12(2), 12.
Galegher, J., Kraut, R. E., & Egido, C. (2014). Intellectual teamwork: Social and technological foundations of cooperative work. Psychology Press.
Hoffman, R. R. (2014). The psychology of expertise: Cognitive research and empirical AI. Psychology Press.
Kalpakjian, S., & Schmid, S. R. (2014). Manufacturing engineering and technology. Upper Saddle River, NJ: Pearson.
Klir, G. (2013). Architecture of systems problem solving. Springer Science & Business Media.
Spector, J. M., Merrill, M. D., Elen, J., & Bishop, M. J. (Eds.). (2014). Handbook of research on educational communications and technology. New York, NY: Springer.
Wilshire, J. C. (2014). U.S. Patent No. 8,635,412. Washington, DC: U.S. Patent and Trademark Office.