
In locking, the Naive spinlock algorithm uses the test-and-set atomic operation, which takes a memory location as its argument, sets that location to 1, and returns the old value (the flag). If the old value of the memory location was 0, the thread has acquired the lock; if it was already 1, the thread spins. In unlocking, the value (flag) of the memory location is simply set to 0. There are three problems here. First, there is contention: all waiting threads repeatedly read the flag from memory. Second, the cache is not exploited. Third, waiting threads remain idle, meaning they do no useful work while waiting to acquire the lock.

The Ticket lock algorithm aims at assigning the lock to threads in a fair way, which the Naive spinlock cannot provide. For each lock in the system it creates a data structure composed of next_ticket and now_serving attributes. In locking, the fetch-and-increment atomic operation is used: the memory location is taken as an argument, its value is incremented, and the old value is returned. Since delaying reduces contention, pausing is applied in this algorithm; a waiting thread pauses for a certain time between checks. In unlocking, the now_serving attribute of the data structure is incremented, which provides fairness.

In Anderson's array-based queueing lock, there is an array-based queue structure and two possible states for the slots in this queue: has-lock and must-wait. If a thread holds the lock and executes its critical section, its slot is in the has-lock state; if it does not hold the lock and has to wait to acquire it, its slot is in the must-wait state. There is another variable, queuelast, which points to the slot for the next requester. In the locking phase, queuelast is incremented and the thread waits until the value at its array position becomes has-lock. For unlocking, the releasing thread's slot is set to must-wait and the next slot in the queue is set to has-lock, so the next thread acquires the lock.

Anderson's algorithm has the drawback of space overhead due to its array-based structure. The MCS list-based queueing lock aims to solve this drawback with a linked-list queueing mechanism. Here, each waiting thread contributes a node to the linked list for the lock. In locking, the fetch-and-store atomic operation is used: the thread's node is placed at the end of the list and its next pointer is set to null; the thread then awaits a signal from its predecessor. In unlocking, the current holder's node is removed from the list and the successor is signalled to acquire the lock.
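To make the ticket lock idea above concrete, here is a minimal Java sketch using the standard AtomicInteger class; the class and field names are mine, not code from [7]:

    import java.util.concurrent.atomic.AtomicInteger;

    // Illustrative ticket lock: fetch-and-increment hands out tickets on
    // acquire, and now_serving is advanced on release (FIFO fairness).
    class TicketLock {
        private final AtomicInteger nextTicket = new AtomicInteger(0);
        private final AtomicInteger nowServing = new AtomicInteger(0);

        void lock() {
            int myTicket = nextTicket.getAndIncrement(); // old value is my place in line
            while (nowServing.get() != myTicket) {
                Thread.onSpinWait(); // pause while spinning to reduce contention
            }
        }

        void unlock() {
            nowServing.incrementAndGet(); // the next ticket holder may proceed
        }
    }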
Mellor-Crummey et al. [7] show performance results for these algorithms on two different architectures, the Butterfly and the Symmetry. Figure 4 in [7] shows the latency of the algorithms on the Butterfly, and Figure 6 shows their latency on the Symmetry. As can be seen from the graphs, the Naive spinlock gives the worst latency, due to the disadvantages explained above. On the Butterfly, with a small number of processors the Ticket, Anderson, and MCS locks perform about equally, but as the number of processors increases, MCS and Ticket outperform Anderson, since their lock-acquisition mechanisms are more dynamic than Anderson's. On the Symmetry, Naive, Anderson, and MCS are compared with each other; as on the Butterfly, the Naive spinlock gives the worst latency and MCS performs better than Anderson.

Q2-)
Scheduling is the task of assigning the processor to execute the threads. The First Come First Serve (FCFS) scheduling policy assigns the thread that arrived first to the processor. FCFS ignores affinity in favor of fairness: it just picks the earliest-arrived thread without considering whether the thread's memory contents are still in the processor's cache, which is called cache affinity.

The Fixed Processor scheduling policy always assigns a thread to the same processor. The idea is that if a thread is always executed on the same processor, its data will be in that processor's cache.

The Last Processor scheduling policy runs a thread on the processor on which it was last executed in the previous scheduling phase. Like Fixed Processor scheduling, this also exploits cache affinity.

The Minimum Intervening scheduling policy assigns a thread to the processor on which the fewest intervening threads have been scheduled since that thread last ran there.

Squillante et al. [9] use three metrics to compare the performance of these scheduling policies: throughput, response time, and variance. Throughput is the number of threads executed per unit time. Response time is the time for a thread's execution. Variance is the variation between executions of a thread depending on the load in the system.

As the memory footprint of a thread gets larger, the cache reload time increases. As shown in Figures 5 and 6 in [9], if the system is heavily loaded, the Fixed Processor policy gives the best throughput: there are many threads in the system and they pollute the caches, so assigning each thread to a fixed processor preserves affinity without creating load imbalance, which yields the best throughput. On the other hand, for a small cache footprint (small reload time) under normal load, FCFS, Last Processor, and Minimum Intervening perform nearly equally in terms of throughput, and all of them outperform Fixed Processor because of its load-imbalance problems. If the system is not heavily loaded, Minimum Intervening gives the best throughput thanks to its dynamic assignment of threads to processors based on their load. Figure 7 in [9] shows results in terms of response time for a heavily loaded system. As in the throughput case, Fixed Processor gives the best response time and FCFS the worst. Last Processor gives better response time than Minimum Intervening, the reverse of the throughput case, because Minimum Intervening has a relatively larger variance in task response times than Last Processor.
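As a rough illustration of the Minimum Intervening choice, the following Java sketch picks a processor from a hypothetical interveningSinceLastRun array; the array and how it is maintained are my assumptions, not code from [9]:

    // Hypothetical sketch: interveningSinceLastRun[p] is the number of other
    // threads scheduled on processor p since this thread last ran there.
    class MinimumIntervening {
        static int pickProcessor(int[] interveningSinceLastRun) {
            int best = 0;
            for (int p = 1; p < interveningSinceLastRun.length; p++) {
                if (interveningSinceLastRun[p] < interveningSinceLastRun[best]) {
                    best = p; // fewer intervening threads means more of this
                              // thread's cache footprint is likely still warm on p
                }
            }
            return best;
        }
    }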
Q3-)
SPIN [3] and Exokernel [4] are two examples of extensible kernels.

In the SPIN approach, the kernel and all other OS services share the same hardware address space. This property avoids border crossings between components and enables extensibility, but raises protection concerns. For safety, SPIN is implemented in a strongly typed programming language, so the compiler can enforce the modularity that provides protection. SPIN also provides logical protection domains, so the dependence on hardware address spaces for protection between the kernel and different OS services vanishes. In other words, the kernel provides its interfaces as logical protection domains, which gives safety and also enables flexibility, because applications can dynamically bind to different implementations of the same OS service.

In the Exokernel approach, the kernel exposes hardware resources explicitly to the library OSes. When a library OS asks for a resource, the Exokernel provides the requested hardware by creating a secure binding between the hardware and that library OS: it creates an encrypted key for the hardware resource and gives it to the requesting library OS. With this secure-binding-with-encrypted-key approach, the Exokernel provides protection. In the Exokernel, OS code belonging to a library OS can be downloaded into the kernel; such code becomes an extension of the Exokernel for that particular library OS. Furthermore, the Exokernel maintains a PE data structure for each library OS, which contains entry points for different event types (exceptions, external interrupts, system calls) and the addressing context for page-fault service. These two properties provide flexibility in the Exokernel.

Q4-)
The remote interface [11] is the mechanism available in the distributed object model. Suppose we have a bank account server with deposit, withdraw, and balance APIs. To make those services a distributed object accessible from clients anywhere on the network, there are two approaches, and the remote interface plays an important role in both. Suppose the developer locally implements a class for the API and then extends it to implement the bank account class. This extension has to be visible so that any client can access it. In this situation the remote interface, which is built into the distributed object model, is used. The remote interface makes the methods of the bank account class visible to all clients on the network; by using the remote interface, the methods of the bank account object are published.

On the server side of Java RMI, three steps are required to make the server object visible on the network. First, the object is instantiated. Then, the URL under which it will be called is created. Finally, at Java run time, the URL is bound to the object instance created in the first step.
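A short Java RMI sketch of the bank account example follows; the BankAccount and BankAccountImpl names are illustrative, while Remote, RemoteException, UnicastRemoteObject, and Naming.rebind are the standard java.rmi API. It assumes an RMI registry is running on localhost:

    import java.rmi.Naming;
    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import java.rmi.server.UnicastRemoteObject;

    // The remote interface: extending java.rmi.Remote is what publishes these
    // methods so that clients anywhere on the network can invoke them.
    interface BankAccount extends Remote {
        void deposit(float amount) throws RemoteException;
        void withdraw(float amount) throws RemoteException;
        float balance() throws RemoteException;
    }

    // Server-side implementation of the remote interface.
    class BankAccountImpl extends UnicastRemoteObject implements BankAccount {
        private float funds = 0;
        BankAccountImpl() throws RemoteException { }
        public synchronized void deposit(float amount)  { funds += amount; }
        public synchronized void withdraw(float amount) { funds -= amount; }
        public synchronized float balance()             { return funds; }
    }

    class BankServer {
        public static void main(String[] args) throws Exception {
            BankAccountImpl account = new BankAccountImpl(); // 1. instantiate the object
            String url = "rmi://localhost/BankAccount";      // 2. create the URL to be called
            Naming.rebind(url, account);                     // 3. bind the URL to the instance
        }
    }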
Q5-)
The main source of RPC [2] overhead is copying. There are four copy operations in a single RPC between a client and a server:

• The kernel copies the parameters of the call message from the client's address space into its buffer.

• The kernel copies the call-message parameters already stored in its buffer into the server's address space.

• The kernel copies the parameters of the result message from the server's address space into its buffer.

• The kernel copies the result-message parameters already stored in its buffer into the client's address space.

Q6-)
Xen [1] and VMware ESX [10] are virtualization platforms. Xen uses the paravirtualization approach, in which the guest OS code is modified, allowing instructions to be sent directly to the hardware. In contrast to Xen, no modification of the guest OS is needed in VMware ESX, since it uses binary translation. This difference makes VMware easier to install and manage compared with Xen. If there is a need to run more than one virtual machine on the same physical machine, VMware ESX should be preferred because its architecture supports that better than Xen. If the system has a virtualization-capable Intel or AMD architecture, either platform can be used; otherwise VMware ESX is the only choice, since Xen does not support architectures other than Intel or AMD.

Q7-)
Recovery techniques are developed to handle system crashes, which can be caused by software errors and power failures.

RVM [8] provides a transaction-based system for persistent data structures: it makes virtual memory persistent. To do this, it uses persistent logs to record changes to virtual memory. A portion of the virtual address space is specified, and the selected region of data is what is protected; here, protection means the ability to recover the data even if a system crash occurs. RVM keeps a persistent log, backed by an external data segment, to store only the changes made to the selected region of virtual memory, and this log is written to disk periodically rather than on every change.

RioVista RVM [6] is a performance-conscious version of RVM. Its main goal is to reduce the transaction overhead and eliminate the disk I/O that is the drawback of the RVM system. In RioVista, a power supply is attached to the memory, and with this adaptation power failures are no longer a problem. Furthermore, with this modification the need for a disk vanishes, since the pair of memory and power supply behaves like a disk. Another difference from RVM is the file cache concept: the Rio file cache provides persistent memory that acts as persistent disk storage. The disk is not used in RioVista RVM.

Q8-)
In GMS [5], cache refers to physical memory. At each node, physical memory is split into two parts, local and global. The local part holds the working sets of the processes currently executing on that node. The global part is similar to a disk in that it provides a community service to the cluster. As the page replacement algorithm, a globally least recently used (global LRU) approach is used.

Suppose we have two nodes in the cluster, NodeP and NodeQ. There are in total four global-memory-hit and global-memory-miss cases resulting from page faults:

• Case 1 (global memory hit): A process executing on NodeP faults on a page, and that page is in the global cache of NodeQ. NodeQ sends the page to NodeP and the page is added to the local part of NodeP's cache. Since NodeP's local allocation grows by one, NodeP sends a page from its global cache to the global cache of NodeQ.

• Case 2 (global memory hit): A process executing on NodeQ faults on a page, and all of the physical memory available at NodeP is consumed by its local cache (working set). The least recently used page in NodeP's local cache is selected and replaced by the page faulted on at NodeQ.

• Case 3 (global memory miss): The faulted page is on disk. It is fetched into the local cache of NodeP. To shrink NodeP's global cache by one, a page is selected from the global cache and sent to the node that holds the globally oldest page in the entire network; since there are only two nodes in this example, that node is NodeQ. There are two possibilities: the globally oldest page is either in NodeQ's local cache or in NodeQ's global cache. In both cases it is written out to disk.

• Case 4 (global memory miss): Until now the faulted page was private to a process; in this case the page is actively shared. Suppose we have three nodes in the cluster: NodeP, NodeQ, and NodeR. Let NodeQ and NodeR share a page, and let a process on NodeQ use the page, which is in NodeQ's local cache. A process on NodeP faults on that page. First, the page is copied from NodeQ's local cache to NodeP's local cache. As in Case 3, to shrink NodeP's global cache by one, a page is selected from the global cache and sent to the node holding the globally oldest page in the network, say NodeR. The globally oldest page is either in NodeR's local cache or in NodeR's global cache; in both cases it is written out to disk.
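As a rough sketch of the bookkeeping in Case 1, the following Java code moves a faulted page from NodeQ's global cache into NodeP's local cache and sends one of NodeP's global pages back to NodeQ; the class and method names are illustrative, not GMS code:

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Hypothetical sketch of GMS Case 1 (global memory hit). Each node splits
    // its physical memory into a local part (working sets) and a global part
    // (community service).
    class GmsNode {
        final Deque<Integer> localCache = new ArrayDeque<>();
        final Deque<Integer> globalCache = new ArrayDeque<>();

        // Page fault on nodeP for a page currently in nodeQ's global cache.
        static void handleGlobalHit(GmsNode nodeP, GmsNode nodeQ, int faultedPage) {
            nodeQ.globalCache.remove(Integer.valueOf(faultedPage)); // Q gives up the page
            nodeP.localCache.addFirst(faultedPage);                 // P's local part grows by one
            if (!nodeP.globalCache.isEmpty()) {
                Integer victim = nodeP.globalCache.removeLast();    // oldest page in P's global part
                nodeQ.globalCache.addFirst(victim);                 // sent to Q, keeping totals balanced
            }
        }
    }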