1.a. Three such problems are:
   i.   Privacy: private data of users can be compromised.
   ii.  Integrity: users' data can be altered without permission.
   iii. Denial of service: one user can hinder another user's work or access to data.
b. No. Developers cannot be sure that their software is foolproof, that is, free of bugs. It is therefore impossible to guarantee that computation resources are allocated fairly and that all unwanted data sharing between parties is prevented. Moreover, any protection scheme can eventually be broken, and the more complex a protection scheme is, the harder it is to verify that its implementation is correct.

2. The resources to be managed in the given settings are as follows:
a. Mainframe or minicomputer systems: network bandwidth, memory, CPU resources, and storage.
b. Workstations connected to servers: memory and CPU resources.
c. Handheld computers: memory resources and power consumption.

1.14 A user fares better in three scenarios: when the shared system is cheaper, faster, or more convenient. For example:
a) When the user has to pay for management and the cost of a time-sharing system is lower than that of a single-user computer.
b) When a simulation or computation would take too long on a single-user workstation.
c) When a traveling user cannot carry a sufficiently powerful system, he or she can connect to a time-shared system remotely and perform the job there.

1.15 The most important difference between symmetric and asymmetric multiprocessing is that in asymmetric multiprocessing only the master processor handles operating-system tasks.
In symmetric multiprocessing, by contrast, operating-system tasks are run by all the processors.

Symmetric Multiprocessing                          | Asymmetric Multiprocessing
---------------------------------------------------|----------------------------------------------------
All processors run OS-related tasks                | Only the master processor handles OS-related tasks
Each processor may have a private process queue,   | Processes are assigned to the slave processors
or all may share a common queue                    | by the master processor
Processes communicate via shared memory            | Processes do not communicate directly; they are
                                                   | controlled by the master processor

Advantages: multiprocessor systems are cheaper, since they share peripherals and power supplies; they execute programs faster; and they are more reliable, since damage to one processor is localized and the system is not prone to single-point failure.
Disadvantages: hardware and software complexity increases sharply, and extra CPU cycles are needed to maintain cooperation among the processors, decreasing the overall per-CPU efficiency.

1.16 A clustered system combines multiple computer systems into a single system that performs a task distributed across the cluster. A multiprocessor system, by contrast, usually consists of multiple CPUs within a single physical entity. Of the two, the clustered system is less tightly coupled: clustered machines communicate by passing messages, while the processors of a multiprocessor system can also communicate through shared memory.

To provide a reliable service, two machines are typically used, and the state of each machine must be replicated and updated at regular intervals. If one of the machines fails, the other can then take over the functionality of the failed machine.

1.17 Consider the following two alternatives: asymmetric clustering and parallel clustering. With asymmetric clustering, one host runs the database application while the other host simply monitors it. If the server fails, the monitoring host becomes the active server.
This is appropriate for providing redundancy, but the potential combined processing power of both hosts is not fully utilized. Parallel clustering allows the database application to execute in parallel on both hosts. The major difficulty in implementing parallel clusters is providing a distributed locking mechanism for the files on the shared disk.

1.18 Personal computers are more secure and easier to fix when they break, whereas a network computer is a terminal that communicates with other machines over the network (TCP/IP being the most common protocol suite). A networking protocol needs only an interface device with a device driver and software to handle the data, so from the operating system's point of view the requirement is minimal. Further, the data is usually centralized, so computers connected to the network can all access the same information without having to save it locally or send it to another system. When data is to be shared or computations are to be run in parallel on a large scale, a network computer is the better choice; otherwise a personal computer suffices.

1.19 An interrupt is a hardware-generated signal that changes the flow of control within the system. A trap is a software-generated interrupt.

An interrupt can be used to signal the completion of I/O so that the CPU does not have to spend cycles polling the device. A trap can be used to catch arithmetic errors or to invoke system routines.

1.20 All devices have special hardware controllers. Normally the OS has device drivers (which are kernel programs) that communicate with these controllers. The controllers have registers, counters, and buffers to store arguments and results. Without DMA, a driver sits in a tight loop to see the I/O through, which ties up the CPU for the duration of the transfer.
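The cost of such a busy-wait loop can be illustrated with a toy simulation (hypothetical, in Python; the function names are invented for illustration): a polled driver spends one CPU check per tick until the device reports completion, while interrupt-driven completion charges the CPU only once.

```python
# Hypothetical sketch: programmed I/O (polling) vs. interrupt-driven I/O.
# The polled driver burns one CPU check per tick; with interrupts the CPU
# is notified exactly once, when the transfer completes.

def polled_io(ticks_until_done: int) -> int:
    """Busy-wait until the device finishes; return CPU checks spent."""
    checks = 0
    tick = 0
    device_done = False
    while not device_done:
        tick += 1
        checks += 1                       # one CPU cycle reading the status register
        device_done = (tick >= ticks_until_done)
    return checks

def interrupt_io(ticks_until_done: int) -> int:
    """With interrupts (or DMA), the CPU handles completion once."""
    return 1                              # a single interrupt at completion

print(polled_io(1000))    # 1000 -- the CPU is tied up for the whole transfer
print(interrupt_io(1000)) # 1
```

The contrast is the point of the answer: the polled cost grows with the length of the transfer, while the interrupt cost does not.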
With DMA, the CPU first loads the appropriate registers of the device controller, and then the controller takes over the transfer. When the device finishes its operation, the DMA controller sends an interrupt to the CPU to indicate completion.

The CPU and the device may attempt to access memory simultaneously; the memory controller arbitrates access to the memory bus fairly between them. During a high-speed transfer the CPU may therefore have to compete for the memory bus and be briefly unable to perform memory operations. Apart from such interrupts and bus contention, there is no interference with user programs: the completion interrupt sent by the DMA controller may, at most, suspend a user process.

1.21 An operating system for a machine of this type would need to remain in control (or monitor mode) at all times. This could be accomplished by two methods:
a. Software interpretation of all user programs (as in some BASIC, Java, and LISP systems, for example). The software interpreter would provide, in software, what the hardware does not provide.
b. Requiring that all programs be written in high-level languages so that all object code is compiler-produced. The compiler would generate (either inline or by function calls) the protection checks that the hardware is missing.

1.22 The different levels are based on access speed as well as size. In general, the closer a cache is to the CPU, the faster the access; however, faster caches are typically more costly. Therefore, smaller, faster caches are placed local to each CPU, while caches that are larger yet slower are shared among several processors.

1.24 In single-processor systems, when a cached value is updated by the processor, the corresponding memory location must eventually be updated as well. These memory updates can be performed either immediately or in a lazy manner.

In a multiprocessor system, the same memory location may be stored by different processors in their local caches.
When one processor updates such a location, the copies of it in the other processors' caches must be updated or invalidated.

In distributed systems, consistency among cached memory values is not an issue as long as a client does not cache file data.