01. What is the need for VCS?

• Keeps a complete history of every change, so earlier versions of files or the whole project can be restored at any time
• Lets several developers work on the same code base in parallel and merge their work safely
• Records who made each change, when, and why (through commit messages)
• Supports branching, so new features can be developed and tested without breaking the working version
• Acts as a backup of the project when the repository is shared or distributed
• Makes it easy to compare versions and trace when and where a bug was introduced


02. Differentiate the three models of VCSs, stating their pros and cons

• Local Data Model: This is the simplest variation of version control, and it
requires that all developers have access to the same file system.

• Client-Server Model: Using this model, developers use a single shared
repository of files. It does require that all developers have access to the
repository via the internet or a local network. This is the model used by
Subversion (SVN).

• Distributed Model: In this model, each developer works directly with their
own local repository, and changes are shared between repositories as a separate
step. This is the model used by Git, open-source software used by many of
the largest software development projects.
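
As a rough sketch of the practical difference (the repository URLs below are placeholders), a client-server tool such as Subversion sends every commit straight to the shared server, while Git records commits locally and shares them as a separate step:

svn checkout https://svn.example.com/project     # client-server: working copy tied to the central repository
svn commit -m "Fix typo"                         # the change goes directly to the shared server

git clone https://git.example.com/project.git    # distributed: a full local repository is copied
git commit -a -m "Fix typo"                      # recorded only in the local repository
git push origin master                           # shared with other repositories as a separate step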




03. Git and GitHub, are they the same or different? Discuss with facts.

• Git is a distributed, peer-to-peer version control system. Each node in the network
is a peer, storing an entire repository, and these peers can also act as multi-node
distributed backups. There is no specific concept of a central server, although
nodes can be headless or 'bare', taking on a role similar to the central server in
centralised version control systems.

• GitHub provides access control and several collaboration features such as wikis,
task management, and bug tracking and feature requests for every project.
You do not need GitHub to use Git.

GitHub (and any other local, remote or hosted system) can all be peers of the
same distributed, versioned repository within a single project.
GitHub allows you to:

• Share your repositories with others.
• Access other users' repositories.
• Store remote copies of your repositories (on GitHub's servers) as backups of your
local copies.
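
For example (the user and repository names below are placeholders), an existing local Git repository can be connected to GitHub simply by adding GitHub as a remote:

git remote add origin https://github.com/<user>/<repo>.git   # register the GitHub repository as a remote named "origin"
git push -u origin master                                    # upload the local history so GitHub holds a remote copy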


04. Compare and contrast the Git commands, commit and push
        
Basically, git commit "records changes to the repository" while git push "updates remote refs along with associated objects". In other words, the first is used in connection with your local repository, while the latter is used to interact with a remote repository.



Once you make a commit, Git remembers the state of all the files and folders in the repository, so even if you make some changes later, you can always come back to that particular state of the repository.
Pushing comes after you have created some commits. You usually push your commits to a remote repository. When you push, the information about the commits you have created will also be present on the remote repository, so other people can see the changes you have made.
So, when you commit some changes, they are present only locally; your colleagues and friends will not know about these changes. When you push to a remote repository, they can see that you have made changes and can then pull those changes.
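
A minimal sequence showing the difference (the file name and commit message are just examples):

git add report.txt                # stage the changed file
git commit -m "Update report"     # record the change in the local repository only
git push origin master            # publish the new commit to the remote repository so others can pull it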

05. Discuss the use of staging area and Git directory

• The Git directory is where Git stores the metadata and object database for your
project. This is the most important part of Git, and it is what is copied when
you clone a repository from another computer.

• The working directory is a single checkout of one version of the project. These
files are pulled out of the compressed database in the Git directory and placed
on disk for you to use or modify.

• The staging area is a simple file, generally contained in your Git directory, that
stores information about what will go into your next commit. It's sometimes
referred to as the index, but it's becoming standard to refer to it as the staging
area.
• The basic Git workflow goes something like this:
1. You modify files in your working directory.
2. You stage the files, adding snapshots of them to your staging area.
3. You do a commit, which takes the files as they are in the staging area and stores
that snapshot permanently to your Git directory.

• If a particular version of a file is in the Git directory, it's considered committed.
If it has been modified and added to the staging area, it is staged. And if it was
changed since it was checked out but has not been staged, it is modified.
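
The three states can be watched with git status as a file moves through this workflow (index.html is just an example file):

git status                        # a freshly edited index.html appears as "modified"
git add index.html                # a snapshot of the file is put into the staging area (the index)
git status                        # index.html is now listed under "Changes to be committed" (staged)
git commit -m "Edit homepage"     # the staged snapshot is stored permanently in the Git directory (committed)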

06. Explain the collaboration workflow of Git, with an example

A Git Workflow is a recipe or recommendation for how to use Git to accomplish work in a consistent and productive manner. Git workflows encourage users to leverage Git effectively and consistently. Git offers a lot of flexibility in how users manage changes. Given Git's focus on flexibility, there is no standardized process on how to interact with Git. When working with a team on a Git managed project, it’s important to make sure the team is all in agreement on how the flow of changes will be applied. To ensure the team is on the same page, an agreed upon Git workflow should be developed or selected. There are several publicized Git workflows that may be a good fit for your team. Here, we’ll be discussing some of these workflow options.
The array of possible workflows can make it hard to know where to begin when implementing Git in the workplace. This section provides a starting point by surveying the most common Git workflows for software teams.
As you read through, remember that these workflows are designed to be guidelines rather than concrete rules. We want to show you what’s possible, so you can mix and match aspects from different workflows to suit your individual needs.

What is a successful Git workflow?

When evaluating a workflow for your team, it's most important that you consider your team’s culture. You want the workflow to enhance the effectiveness of your team and not be a burden that limits productivity. Some things to consider when evaluating a Git workflow are:
  • Does this workflow scale with team size?
  • Is it easy to undo mistakes and errors with this workflow?
  • Does this workflow impose any new unnecessary cognitive overhead to the team?
Example:

Let's take a general look at how a typical small team would collaborate using a centralized workflow. We'll see how two developers, John and Mary, can work on separate features and share their contributions via a centralized repository.

John works on his feature

In his local repository, John can develop features using the standard Git commit process: edit, stage, and commit.
Remember that since these commands create local commits, John can repeat this process as many times as he wants without worrying about what’s going on in the central repository.

Mary works on her feature

Meanwhile, Mary is working on her own feature in her own local repository using the same edit/stage/commit process. Like John, she doesn’t care what’s going on in the central repository, and she really doesn’t care what John is doing in his local repository, since all local repositories are private.

John publishes his feature

Once John finishes his feature, he should publish his local commits to the central repository so other team members can access it. He can do this with the git push command, like so:
git push origin master
Remember that origin is the remote connection to the central repository that Git created when John cloned it. The master argument tells Git to try to make the origin's master branch look like his local master branch. Since the central repository hasn't been updated since John cloned it, this won't result in any conflicts and the push will work as expected.

Mary tries to publish her feature

Let’s see what happens if Mary tries to push her feature after John has successfully published his changes to the central repository. She can use the exact same push command:
git push origin master
But, since her local history has diverged from the central repository, Git will refuse the request with a rather verbose error message:
error: failed to push some refs to '/path/to/repo.git'
hint: Updates were rejected because the tip of your current branch is behind
hint: its remote counterpart. Merge the remote changes (e.g. 'git pull')
hint: before pushing again.
hint: See the 'Note about fast-forwards' in 'git push --help' for details.
This prevents Mary from overwriting official commits. She needs to pull John’s updates into her repository, integrate them with her local changes, and then try again.

Mary rebases on top of John’s commit(s)

Mary can use git pull to incorporate upstream changes into her repository. This command is sort of like svn update—it pulls the entire upstream commit history into Mary’s local repository and tries to integrate it with her local commits:
git pull --rebase origin master
The --rebase option tells Git to move all of Mary's commits to the tip of the master branch after synchronising it with the changes from the central repository.
The pull would still work if you forgot this option, but you would wind up with a superfluous “merge commit” every time someone needed to synchronize with the central repository. For this workflow, it’s always better to rebase instead of generating a merge commit.

Mary resolves a merge conflict

Rebasing works by transferring each local commit to the updated master branch one at a time. This means that you catch merge conflicts on a commit-by-commit basis rather than resolving all of them in one massive merge commit. This keeps your commits as focused as possible and makes for a clean project history. In turn, this makes it much easier to figure out where bugs were introduced and, if necessary, to roll back changes with minimal impact on the project.
If Mary and John are working on unrelated features, it’s unlikely that the rebasing process will generate conflicts. But if it does, Git will pause the rebase at the current commit and output the following message, along with some relevant instructions:
CONFLICT (content): Merge conflict in <some-file>
The great thing about Git is that anyone can resolve their own merge conflicts. In our example, Mary would simply run a git status to see where the problem is. Conflicted files will appear in the Unmerged paths section:
# Unmerged paths:
# (use "git reset HEAD <some-file>..." to unstage)
# (use "git add/rm <some-file>..." as appropriate to mark resolution)
#
# both modified: <some-file>
Then, she’ll edit the file(s) to her liking. Once she’s happy with the result, she can stage the file(s) in the usual fashion and let git rebase do the rest:
git add <some-file>
git rebase --continue
And that’s all there is to it. Git will move on to the next commit and repeat the process for any other commits that generate conflicts.
If you get to this point and realize that you have no idea what's going on, don't panic. Just execute the following command and you'll be right back to where you started:
git rebase --abort

Mary successfully publishes her feature

After she’s done synchronizing with the central repository, Mary will be able to publish her changes successfully:
git push origin master
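
Putting Mary's side of the example together, her full sequence in this situation is as follows (the conflict-resolution steps only apply if the rebase actually stops on a conflict):

git pull --rebase origin master   # fetch John's commits and replay Mary's commits on top of them
git add <some-file>               # after fixing a conflict, mark the file as resolved
git rebase --continue             # let Git finish replaying the remaining commits
git push origin master            # publish once her local history includes John's work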


07. Discuss the benefits of CDNs       

1. Your reliability and response times get a huge boost

A high-performing website means high conversions and growing sales. Latency and speed issues tend to cripple web businesses and cause damage. A few seconds can mean the difference between a successful conversion and a bounce. A reliable CDN ensures that load speeds stay optimal and that online transactions are made seamlessly.

2. A CDN enables global reach

Over one third of the world’s population is online, which means that the global use of the internet has increased exponentially over the last 15 years. CDNs provide solutions through cloud acceleration with local POPs. This global reach will eliminate any latency problems that interrupt long-distance online transactions and cause slow load times.

3. A CDN saves a lot of money

Hiring a CDN results in noticeable savings for a business; rather than investing in an infrastructure and separate service providers all across the globe, a global CDN can eliminate the need to pay for costly foreign hosting and thus, save your business a lot of money. A global CDN offers a single platform to handle all of the separate operations, working across numerous regions for a reasonable price. CDNs are also recommended for companies with a tight budget.

4. 100 percent availability

Due to the distribution of assets across many regions, CDNs have automatic server availability sensing mechanisms with instant user redirection. As a result, CDN websites experience 100 percent availability, even during massive power outages, hardware issues or network problems.

5. Decrease server load

The strategic placement of a CDN can decrease the server load on interconnects, public and private peers and backbones, freeing up overall capacity and decreasing delivery costs. Essentially, the content is spread out across several servers, as opposed to loading it all onto one large server.

6. 24/7 customer support

Quality CDNs are known for outstanding customer support. In other words, there is a CS team on standby at all times, at your disposal. Whenever something occurs, you have backup that's waiting to help you fix your performance-related problems. Having a support team on speed dial is a smart business decision – you're not just paying for a cloud service, you're paying for a broad spectrum of services that help your business grow on a global scale.

7. Increase in the number of Concurrent Users

Strategically placing the servers in a CDN can result in high network backbone capacity, which equates to a significant increase in the number of users accessing the network at a given time. For example, where there is a 100 GB/s network backbone with 2 TB/s capacity, only 100 GB/s can be delivered. However, with a CDN, 10 servers will be available at 10 strategic locations and can then provide a total capacity of 10 x 100 GB/s.

8. DDoS protection

Other than inflicting huge economic losses, DDoS attacks can also have a serious impact on the reputation and image of the victimized company or organization. Whenever customers type in their credit card numbers to make a purchase online, they are placing their trust in that business. DDoS attacks are on the rise and new ways of Internet security are being developed; all of which have helped increase the growth of CDNs, as cloud security adds another layer of security.  Cloud solutions are designed to stop an attack before it ever reaches your data center. A CDN will take on the traffic and keep your website up and running. This means you need not be concerned about DDoS attacks impacting your data center, keeping your business’ website safe and sound.

9. Analytics

Content delivery networks not only deliver content at a fast pace, they can also offer priceless analytical info to discover trends that could lead to advertising sales and reveal the strengths and weaknesses of your online business. CDNs can deliver real-time load statistics, optimize capacity per customer, display active regions, indicate which assets are popular, and report viewing details to their customers. These details are extremely important, since usage logs are deactivated once the server source has been added to the CDN. This analysis shows everything a developer needs to know to further optimize the website. In-depth reporting ultimately leads to performance increases, which result in a better user experience and, in turn, higher sales and conversion rates.

08. How do CDNs differ from web hosting servers?
  1. Web Hosting is used to host your website on a server and let users access it over the internet. A content delivery network is about speeding up the access/delivery of your website’s assets to those users.
  2. Traditional web hosting would deliver 100% of your content to the user. If the user is located across the world, they must still wait for the data to be retrieved from where your web server is located. A CDN takes a majority of your static and dynamic content and serves it from across the globe, decreasing download times. Most times, the closer the CDN server is to the web visitor, the faster assets will load for them.
  3. Web Hosting normally refers to one server. A content delivery network refers to a global network of edge servers which distributes your content from a multi-host environment.



09.  Identify free and commercial CDNs     

Free CDNs       

1. CloudFlare

CloudFlare is popularly known as the best free CDN for WordPress users. It is one of the few industry-leading players that actually offer a free plan. Powered by its 115 datacenters, CloudFlare delivers speed, reliability, and protection from basic DDoS attacks. And its WordPress plugin is used on over 100,000 active websites.

2. Incapsula

Incapsula provides Application Delivery from the cloud: Global CDN, Website Security, DDoS Protection, Load Balancing & Failover. It takes 5 minutes to activate the service, and they have a great free plan and a WordPress plugin to get correct IP Address information for comments posted to your site.
In a nutshell, this is what Incapsula and CloudFlare both do:
  • Routes your entire website’s traffic through their globally distributed network of high end servers (This is achieved by a small DNS change)
  • Real-time threat analysis of incoming traffic and blocking the latest web threats including multi-Gigabit DDoS attacks
  • Outgoing traffic is accelerated through their globally powered content delivery network

3. Photon by Jetpack

WordPress users need no introduction to Jetpack. One of the coolest features Jetpack has to offer is its free CDN service, called Photon, which serves your site's images through the globally powered WordPress.com grid. The best part? You don't need to configure a thing: simply download and install Jetpack, log in with your WordPress.com account and activate the Photon module. That's it. All your images will be offloaded to the WordPress.com grid, which powers hundreds of thousands of websites across the globe.

4. Swarmify

Swarmify (previously known as SwarmCDN) is a peer-to-peer (P2P) based content delivery network that offers 10GB of bandwidth (only for images) in their free plan. To try it out, download the WordPress plugin and give it a go. It is interesting to note that Swarmify works in a slightly different manner:
Let’s say a group of people are browsing your site. Think of them as the first ‘peer’ in P2P. When a new visitor (peer) arrives, the images are served from the already existing group of users (the previous peer). This saves your server’s bandwidth, and improves loading times, since the peers are usually closer to one another. Swarmify also offers video CDN, which is only a part of their paid plan.

Commercial CDNs

Google Cloud CDN is one example of a commercial CDN. Its main selling points include:

Global Reach

With caches at more than 90 sites around the world, Cloud CDN is always close to your users. That means faster page loads and increased engagement. And, unlike most CDNs, your site gets a single IP address that works everywhere, combining global performance with easy management — no regional DNS required. 

SSL Shouldn't Cost Extra

The web is moving to HTTPS, and your cacheable content should, too. With Cloud CDN, you can secure your content using SSL/TLS for no additional charge.

Seamless Integration

Cloud CDN is tightly integrated with the Google Cloud Platform. Enable Cloud CDN with a single checkbox and use the Google Cloud Platform Console and Stackdriver Logging for full visibility into the operation of your site.

Media CDN Support

Cloud CDN includes support for large objects (up to 5 TB) making it the ideal platform to deliver media and gaming to customers around the globe.



10. Discuss the requirements for virtualization
• Virtualization describes a technology in which an application, guest operating
system or data storage is abstracted away from the true underlying hardware or
software. A key use of virtualization technology is server virtualization, which
uses a software layer called a hypervisor to emulate the underlying hardware.
This often includes the CPU's memory, I/O and network traffic. The guest
operating system, normally interacting with true hardware, is now doing so
with a software emulation of that hardware, and often the guest operating
system has no idea it's on virtualized hardware. While the performance of this
virtual system is not equal to the performance of the operating system running
on true hardware, the concept of virtualization works because most guest
operating systems and applications don't need the full use of the underlying
hardware. This allows for greater flexibility, control and isolation by removing
the dependency on a given hardware platform. While initially meant for server
virtualization, the concept of virtualization has spread to applications,
networks, data and desktops.

11. Discuss and compare the pros and cons of different virtualization techniques at different levels

The Advantages of Virtualization
1. It is cheaper.
Because virtualization doesn’t require actual hardware components to be used or
installed, IT infrastructures find it to be a cheaper system to implement. There is no
longer a need to dedicate large areas of space and huge monetary investments to
create an on-site resource. You just purchase the license or the access from a third-party
provider and begin to work, just as if the hardware were installed locally.

2. It keeps costs predictable.
Because third-party providers typically provide virtualization options, individuals and
corporations can have predictable costs for their information technology needs. For
example: the cost of a Dell PowerEdge T330 Tower Server, at the time of writing, is
$1,279 direct from the manufacturer. In comparison, services provided by Bluehost
Web Hosting can be as low as $2.95 per month.

3. It reduces the workload.
Most virtualization providers automatically update their hardware and software that
will be utilized. Instead of sending people to do these updates locally, they are
installed by the third-party provider. This allows local IT professionals to focus on
other tasks and saves even more money for individuals or corporations.

4. It offers a better uptime.
Thanks to virtualization technologies, uptime has improved dramatically. Some
providers offer an uptime that is 99.9999%. Even budget-friendly providers offer
uptime at 99.99% today.

5. It allows for faster deployment of resources.
Resource provisioning is fast and simple when virtualization is being used. There is
no longer a need to set up physical machines, create local networks, or install other
information technology components. As long as there is at least one point of access to
the virtual environment, it can be spread to the rest of the organization.

6. It promotes digital entrepreneurship.
Before virtualization occurred on a large scale, digital entrepreneurship was virtually
impossible for the average person. Thanks to the various platforms, servers, and
storage devices that are available today, almost anyone can start their own side hustle
or become a business owner. Sites like Fiverr and UpWork make it possible for
anyone to set up shop and begin finding some work to do.

7. It provides energy savings.
For most individuals and corporations, virtualization is an energy-efficient system.
Because there aren’t local hardware or software options being utilized, energy
consumption rates can be lowered. Instead of paying for the cooling costs of a data
center and the operational costs of equipment, funds can be used for other operational
expenditures over time to improve virtualization’s overall ROI.

The Disadvantages of Virtualization
1. It can have a high cost of implementation.
The cost for the average individual or business when virtualization is being considered
will be quite low. For the providers of a virtualization environment, however, the
implementation costs can be quite high. Hardware and software are required at some
point and that means devices must either be developed, manufactured, or purchased
for implementation.

2. It still has limitations.
Not every application or server is going to work within an environment of
virtualization. That means an individual or corporation may require a hybrid system to
function properly. This still saves time and money in the long run, but since not every
vendor supports virtualization and some may stop supporting it after initially starting
it, there is always a level of uncertainty when fully implementing this type of system.

3. It creates a security risk.
Information is our modern currency. If you have it, you can make money. If you don’t
have it, you’ll be ignored. Because data is crucial to the success of a business, it is
targeted frequently. The average cost of a data security breach in 2017, according to a
report published by the Ponemon Institute, was $3.62 million. For perspective: the
chances of being struck by lightning are about 1 in a million. The chances of
experiencing a data breach while using virtualization? 1 in 4.

4. It creates an availability issue.
The primary concern that many have with virtualization is what will happen to their
work should their assets not be available. If an organization cannot connect to their
data for an extended period of time, they will struggle to compete in their industry.
And, since availability is controlled by third-party providers, the ability to stay
connected is not in one's control with virtualization.

5. It creates a scalability issue.
Although you can grow a business or opportunity quickly because of virtualization,
you may not be able to become as large as you’d like. You may also be required to be
larger than you want to be when first starting out. Because many entities share the
same resources, growth creates lag within a virtualization network. One large
presence can take resources away from several smaller businesses and there would be
nothing anyone could do about it.

6. It requires several links in a chain that must work together cohesively.
If you have local equipment, then you are in full control of what you can do. With
virtualization, you lose that control because several links must work together to
perform the same task. Let's use the example of saving a document file. With a
local storage device, like a flash drive or HDD, you can save the file immediately and
even create a backup. Using virtualization, your ISP connection would need to be
valid. Your LAN or Wi-Fi would need to be working. Your online storage option
would need to be available. If any of those are not working, then you’re not saving
that file.

7. It takes time.
Although you save time during the implementation phases of virtualization, it costs
users time over the long-run when compared to local systems. That is because there
are extra steps that must be followed to generate the desired result.
The advantages and disadvantages of virtualization show us that it can be a useful tool
for individuals, SMBs, entrepreneurs, and corporations when it is used properly.
Because it is so easy to use, however, some administrators begin adding new servers
or storage for everything and that creates sprawl. By staying disciplined and aware of
communication issues, many of the disadvantages can be tempered, which is why this
is such an effective modern system.

12. Identify popular implementations and available tools for each level of visualization
Implementations
  1. Speak to a specific audience
  2. Choose the right visual
  3. Provide context
  4. Keep things simple and digestible
  5. Design for user engagement
Available Tools
Tableau is often regarded as the grand master of data visualization software and for good reason. Tableau has a very large customer base of 57,000+ accounts across many industries due to its simplicity of use and ability to produce interactive visualizations far beyond those provided by general BI solutions. It is particularly well suited to handling the huge and very fast-changing datasets which are used in Big Data operations, including artificial intelligence and machine learning applications, thanks to integration with a large number of advanced database solutions including Hadoop, Amazon AWS, MySQL, SAP and Teradata. Extensive research and testing has gone into enabling Tableau to create graphics and visualizations as efficiently as possible, and to make them easy for humans to understand.
Qlik with their Qlikview tool is the other major player in this space and Tableau’s biggest competitor. The vendor has over 40,000 customer accounts across over 100 countries, and those that use it frequently cite its highly customizable setup and wide feature range as a key advantage. This however can mean that it takes more time to get to grips with and use it to its full potential. In addition to its data visualization capabilities Qlikview offers powerful business intelligence, analytics and enterprise reporting capabilities and I particularly like the clean and clutter-free user interface. Qlikview is commonly used alongside its sister package, Qliksense, which handles data exploration and discovery. There is also a strong community and there are plenty of third-party resources available online to help new users understand how to integrate it in their projects.
FusionCharts is a very widely-used, JavaScript-based charting and visualization package that has established itself as one of the leaders in the paid-for market. It can produce 90 different chart types and integrates with a large number of platforms and frameworks, giving a great deal of flexibility. One feature that has helped make FusionCharts very popular is that rather than having to start each new visualization from scratch, users can pick from a range of "live" example templates, simply plugging in their own data sources as needed.
Highcharts, like FusionCharts, requires a licence for commercial use, although it can be used freely as a trial or for non-commercial or personal use. Its website claims that it is used by 72 of the world's 100 largest companies and it is often chosen when a fast and flexible solution must be rolled out, with a minimum need for specialist data visualization training before it can be put to work. A key to its success has been its focus on cross-browser support, meaning anyone can view and run its interactive visualizations, which is not always true with newer platforms.
Datawrapper is increasingly becoming a popular choice, particularly among media organizations which frequently use it to create charts and present statistics. It has a simple, clear interface that makes it very easy to upload csv data and create straightforward charts, and also maps, that can quickly be embedded into reports.
Plotly enables more complex and sophisticated visualizations, thanks to its integration with analytics-oriented programming languages such as Python, R and Matlab. It is built on top of the open source d3.js visualization libraries for JavaScript, but this commercial package (with a free non-commercial licence available) adds layers of user-friendliness and support as well as inbuilt support for APIs such as Salesforce.
Sisense provides a full-stack analytics platform, but its visualization capabilities provide a simple-to-use drag-and-drop interface which allows charts and more complex graphics, as well as interactive visualizations, to be created with a minimum of hassle. It enables multiple sources of data to be gathered into one easily accessed repository where it can be queried through dashboards instantaneously, even across Big Data-sized sets. Dashboards can then be shared across organizations, ensuring even non-technically-minded staff can find the answers they need to their problems.

13. What is a hypervisor and what is its role?

• Hypervisors provide several benefits to the enterprise data center. First, the
ability of a physical host system to run multiple guest VMs can vastly improve
the utilization of the underlying hardware. Where physical (nonvirtualized)
servers might only host one operating system and application, a hypervisor
virtualizes the server, allowing the system to host multiple VM instances --
each running an independent operating system and application -- on the same
physical system using far more of the system's available compute resources.

• VMs are also very mobile. The abstraction that takes place in a hypervisor also
makes the VM independent of the underlying hardware. Traditional software
can be tightly coupled to the underlying server hardware, meaning that moving
the application to another server requires time-consuming and error-prone
reinstallation and reconfiguration of the application. By comparison, a
hypervisor makes the underlying hardware details irrelevant to the VMs. This
allows any VMs to be moved or migrated between any local or
remote virtualized servers -- with sufficient computing resources available --
almost at-will with effectively zero disruption to the VM; a feature often
termed live migration.

• VMs are also logically isolated from each other -- even though they run on the
same physical machine. In effect, a VM has no native knowledge or
dependence on any other VMs. An error, crash or malware attack on one VM
does not proliferate to other VMs on the same or other machines. This makes
hypervisor technology extremely secure.

• Finally, VMs are easier to protect than traditional applications. A physical
application typically needs to be first quiesced and then backed up using a
time-consuming process that results in substantial downtime for the application.
A VM is essentially little more than code operating in a server's memory
space. Snapshot tools can quickly capture the content of that VM's memory
space and save it to disk in moments -- usually without quiescing the
application at all. Each snapshot captures a point-in-time image of the VM
which can be quickly recalled to restore the VM on demand.

Understanding the Role of a Hypervisor

• The explanation of a hypervisor up to this point has been fairly simple: it is a
layer of software that sits between the hardware and the one or more virtual
machines that it supports. Its job is also fairly simple. The three characteristics
defined by Popek and Goldberg illustrate these tasks:
• Provide an environment identical to the physical environment
• Provide that environment with minimal performance cost
• Retain complete control of the system resources

14. How is emulation different from VMs?

• An emulator emulates the hardware completely in software. Therefore you can
play Amiga or SNES games on an emulator on a PC, although these consoles had
completely different hardware and processors. It's not just for legacy gaming -
you can run an operating system on an emulator on emulated hardware that
no longer exists (or is difficult to obtain). Another typical use case for an emulator
is cross-platform compilation of code (for different platforms: x86, MIPS,
32-bit ARMv7, ARMv8, PowerPC, SPARC, ETRAX CRIS, MicroBlaze ...)
using just one set of hardware.

• Emulator examples: FS-UAE Amiga Emulator, SNES9X (games) and
Bochs, QEMU (OS).

• Virtualization is a technique that exposes a virtual resource (such as a CPU,
disk, RAM, NIC, etc.) by virtualizing existing physical hardware. For example,
there are specific CPU instruction sets designed to virtualize a CPU into more
vCPUs. The main difference here is, for example, that you are unable to provide a
virtual machine with a 16-bit processor, because you physically don't have one. A
16-bit processor can be emulated on an x64 Intel (AMD) CPU but cannot be
virtualized. The same goes for the rest of the hardware.

• Sometimes emulators and hypervisors are used in combination - for
example KVM/QEMU: KVM provides the CPU, memory and disk, and QEMU
provides peripherals such as the keyboard, mouse, monitor, NIC, USB bus,
etc. - which you may not even have physically connected to the PC/server
you're using for virtualization.
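
As a rough, hedged illustration of this combination (assuming QEMU is installed; disk.img and zImage are placeholder files), the same tool can run in both modes:

qemu-system-x86_64 -enable-kvm -m 2048 disk.img    # virtualization: guest code runs on the host's own x86 CPU via KVM
qemu-system-arm -M virt -m 1024 -kernel zImage     # emulation: a full ARM machine is emulated in software on an x86 host

The first command boots a guest at near-native speed because the CPU is virtualized, while the second can run software built for hardware you do not own, at the cost of emulation overhead.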

15. Compare and contrast VMs and containers/Docker, indicating their advantages and disadvantages

What are VMs?
A virtual machine (VM) is an emulation of a computer system. Put simply, it makes it
possible to run what appear to be many separate computers on hardware that is
actually one computer.

Benefits of VMs
• All OS resources available to apps
• Established management tools
• Established security tools
• Better known security controls
What are Containers?
• With containers, instead of virtualizing the underlying computer like a virtual
machine (VM), just the OS is virtualized.
• Containers sit on top of a physical server and its host OS — typically Linux or
Windows. Each container shares the host OS kernel and, usually, the binaries
and libraries, too. Shared components are read-only. Sharing OS resources such
as libraries significantly reduces the need to reproduce the operating system
code, and means that a server can run multiple workloads with a single
operating system installation. Containers are thus exceptionally light — they
are only megabytes in size and take just seconds to start. Compared to
containers, VMs take minutes to run and are an order of magnitude larger than
an equivalent container.
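
As a small sketch of this lightness in practice (assuming Docker is installed and using the public nginx image purely as an example), a container can be pulled and started in seconds:

docker pull nginx                           # downloads a layered image measured in megabytes, not gigabytes
docker run -d --name web -p 8080:80 nginx   # starts an isolated container that shares the host OS kernel
docker ps                                   # the container is already up; no guest OS had to boot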

Benefits of Containers
• Reduced IT management resources
• Reduced size of snapshots
• Quicker spinning up apps
• Reduced & simplified security updates
• Less code to transfer, migrate, upload workloads
Uses for VMs vs Uses for Containers
Both containers and VMs have benefits and drawbacks, and the ultimate decision will
depend on your specific needs, but there are some general rules of thumb.
• VMs are a better choice for running apps that require all of the operating
system’s resources and functionality, when you need to run multiple
applications on servers, or have a wide variety of operating systems to manage.
• Containers are a better choice when your biggest priority is maximizing the
number of applications running on a minimal number of servers.


