Friday, 21 September 2018

What's the Difference Between Containers and Virtual Machines?



Hypervisors are a way to manage virtual machines (VMs) on processors that support the virtual replication of hardware. Not all processors have this type of hardware support; it's typically found in mid- to high-end microprocessors. It's standard fare on server processors like Intel's Xeon and found on most application processors such as the Arm Cortex-A series. Typically, a VM will run any software that runs on the bare-metal hardware while providing isolation from the real hardware. Type 1 hypervisors run on bare metal, while Type 2 hypervisors run on top of an underlying operating system (see figure, a).
Containers vs. VMs
Containers also provide a way to isolate applications and provide a virtual platform for applications to run on (see figure, b). Two main differences exist between a container and a hypervisor system.
A container system requires an underlying operating system that provides the basic services to all of the containerized applications, using virtual-memory support for isolation. A hypervisor, on the other hand, runs VMs that have their own operating system, using hardware VM support. Container systems have lower overhead than VMs and typically target environments where thousands of containers are in play. Container systems usually provide service isolation between containers; as a result, container services such as file systems or network support can be given limited resource access.
There is also something called para-virtualization, which is sort of a mix between the two approaches. It uses virtual-memory support for isolation, but it requires special device drivers in the VM that are linked through the hypervisor to the underlying operating system, which in turn provides the device services.
A hardware VM system forces any communication with a VM to go through the hardware. Some systems allow real hardware to map directly to a VM's environment, enabling the VM's device driver to directly handle the hardware. Hardware I/O virtualization also allows a single hardware device like an Ethernet adapter to present multiple, virtual instances of itself so that multiple VMs can manage their instance directly.
Virtual machines (VMs) are managed by a hypervisor and utilize VM hardware (a), while container systems provide operating-system services from the underlying host and isolate the applications using virtual-memory hardware (b).
In a nutshell, a VM provides an abstract machine that uses device drivers targeting that abstract machine, while a container provides an abstract OS. A para-virtualized VM environment provides a hardware abstraction layer (HAL) that requires HAL-specific device drivers. Applications running in a container environment share an underlying operating system, while VM systems can run different operating systems. Typically, a VM hosts multiple applications whose mix may change over time, whereas a container normally holds a single application. However, it's possible to have a fixed set of applications in a single container.
Virtual-machine technology is well-known in the embedded community, but containers tend to be the new kid on the block, so they warrant a bit more coverage in this article. Containers have been all the rage on servers and in the cloud, with companies like Facebook and Google investing heavily in container technology. For example, each Google Docs service gets a container per user instance.
A number of container technologies are available, with Linux leading the charge. One of the more popular platforms is Docker, which is now based on Linux libcontainer. More precisely, Docker is a management system that's used to create, manage, and monitor Linux containers. Ansible is another container-management system favored by Red Hat.
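To make that management layer concrete, here's a minimal sketch using the Docker SDK for Python (the docker package); it assumes a Docker Engine is installed and running locally, and the image and container names are just examples.

```python
import docker  # Docker SDK for Python ("pip install docker"); talks to a local Docker Engine

# Connect using the environment's Docker settings (e.g., the local Docker socket).
client = docker.from_env()

# Create and start a container from a small example image.
container = client.containers.run("alpine:latest", "echo hello from a container",
                                  detach=True, name="demo")

container.wait()                    # block until the command finishes
print(container.logs().decode())    # "hello from a container"

# The same API is used to list, monitor, and clean up many containers.
print([c.name for c in client.containers.list(all=True)])
container.remove()
```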
Microsoft is a late arrival to the container approach, but its Windows Containers is a way to provide container services on a Windows platform. Of course, it's possible to host a Linux container service as a VM on Microsoft server platforms like Hyper-V. Container-management systems like Docker and Ansible can manage Windows-based servers providing container support.
Based-File Systems, Virtual Containers and Thin VMs
Containers provide a number of advantages over VMs, although some of them can be matched using other techniques. One advantage is the low overhead of containers and, therefore, the ability to start new containers quickly. A VM, by contrast, must boot its underlying OS, which takes time, memory, and VM disk storage. The time issue may be difficult to address, but the other two can be.
The easiest is the VM disk storage. Normally, a VM needs at least one unique image file for every running instance of a VM. It contains the OS and often the application code and data as well. Much of this is common among similar VMs. In the case of a raw image, a complete copy of the file is needed for each instance. This could require copying multiple gigabytes per instance.
The alternative is to use a based-file format like QEMU's qcow2, which is supported by Linux's KVM virtual-machine manager. In this case, an initial instance of the VM is set up and the operating system is installed, possibly along with additional applications. The VM is then terminated and the resulting file is used as the base for subsequent qcow2 files.
Setting up one of these subsequent files takes minimal time and space. It can then be used by a new VM, where changes made to the disk are recorded in the new file. Typically, the based file contains information that will not change in the new file, although an operation like updating the operating system may cause the new file to grow significantly. That can mask the original file to the point where it is no longer referenced, since all of its data has been overwritten.
The chain of based files can continue so that there may be a starting image with just the operating system. The next in the chain may add services like a database application. Another might add a web server. Starting up a new instance of a database server would build a new file starting from the image with the database in it, while a web server with database server would start from the database/web-server file.
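As a rough illustration of that chain, the sketch below wraps qemu-img (the tool that manages qcow2 images) from Python; the file names are placeholders and the exact flags may vary by QEMU version.

```python
import subprocess

def create_overlay(base_image: str, overlay_image: str) -> None:
    """Create a qcow2 overlay whose backing (based) file is base_image.

    Only changes made by the new VM are written to overlay_image; unchanged
    blocks are read from the base. Requires qemu-img on the PATH; the -F flag
    (backing-file format) is expected by recent QEMU releases.
    """
    subprocess.run(
        ["qemu-img", "create", "-f", "qcow2",
         "-b", base_image, "-F", "qcow2", overlay_image],
        check=True,
    )

# Hypothetical chain: OS-only base -> base plus database -> database plus web server.
create_overlay("os-base.qcow2", "os-plus-db.qcow2")
create_overlay("os-plus-db.qcow2", "db-plus-web.qcow2")
```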
The use of based files addresses duplication of file storage. For memory deduplication, we need to turn to the hypervisor. Some hypervisors can determine when particular memory blocks are duplicates, such as the underlying OS code when two or more VMs run the same OS with exactly the same code. This approach can significantly reduce the amount of memory required, depending on how much shared code or data can be identified.
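On a Linux/KVM host, one common way this shows up is kernel same-page merging (KSM); the article doesn't name a specific mechanism, so treat the following as one illustrative example that simply reads the KSM counters exposed under /sys.

```python
from pathlib import Path

KSM = Path("/sys/kernel/mm/ksm")  # Linux KSM (kernel same-page merging) interface

def ksm_stats() -> dict:
    """Read a few KSM counters (page counts); assumes KSM is built into the kernel."""
    return {name: int((KSM / name).read_text())
            for name in ("pages_shared", "pages_sharing", "pages_unshared")}

if __name__ == "__main__":
    stats = ksm_stats()
    # pages_sharing relative to pages_shared roughly indicates how many duplicate
    # pages (e.g., identical guest-OS code across VMs) were collapsed onto one copy.
    print(stats)
```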
An issue with containers is the requirement that the underlying OS be the same for all containers being supported. This is often a happy occurrence for embedded systems in which applications can be planned to use the same OS. Of course, this isn't always the case; this can even be an issue in the cloud. The answer is to run the container system in its own VM. In fact, the management tools can handle this, because a collection of services/containers will often be designed to run on a common container platform.
Finally, there's the idea of thin VMs. These VMs have a minimal OS and run a single application. Many times, the OS forwards most of the service requests, such as file access, to a network server. Stripped-down versions of standard operating systems like Linux are substantially smaller. In the extreme case, the OS support is actually linked into the application so that the VM is just running a single program. For embedded applications, the network communication may be done via shared memory, providing a quick way to communicate with other VMs on the same system.  
No one approach addresses all embedded applications, and there may be more than one reasonable alternative to deploying multiple program instances. It will be more critical to consider the alternatives when designing a system as the world moves from single-core platforms to ones with many, many cores.
source

Thursday, 20 September 2018

WiFi 802.11ax - What is a Resource Unit?

A 20 MHz OFDMA channel consists of a total of 256 subcarriers (tones). These tones are grouped into smaller sub-channels, known as resource units (RUs). As shown in Figure 1, when subdividing a 20 MHz channel, an 802.11ax access point designates 26-, 52-, 106-, and 242-subcarrier resource units (RUs), which equate roughly to 2 MHz, 4 MHz, 8 MHz, and 20 MHz channels, respectively. The 802.11ax access point dictates how many RUs are used within a 20 MHz channel, and different combinations can be used.
Figure 1- OFDMA resource units – 20 MHz channel
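To keep those sizes straight, here's a small Python sketch mapping RU sizes to their approximate widths; the per-channel counts (nine 26-tone RUs, four 52-tone, two 106-tone, or one 242-tone in 20 MHz) are our reading of the 802.11ax RU plan rather than something stated explicitly in this post.

```python
# RU size in tones -> approximate width and how many fit in a 20 MHz channel
# (counts are assumptions based on the 802.11ax resource-unit plan).
RU_TABLE = {
    26:  {"approx_mhz": 2,  "max_per_20mhz": 9},
    52:  {"approx_mhz": 4,  "max_per_20mhz": 4},
    106: {"approx_mhz": 8,  "max_per_20mhz": 2},
    242: {"approx_mhz": 20, "max_per_20mhz": 1},
}

def max_simultaneous_clients(ru_tones: int) -> int:
    """Clients served in one transmission if the 20 MHz channel is split
    entirely into RUs of this single size."""
    return RU_TABLE[ru_tones]["max_per_20mhz"]

print(max_simultaneous_clients(26))   # 9 clients, ~2 MHz each
print(max_simultaneous_clients(106))  # 2 clients, ~8 MHz each
```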
An 802.11ax AP may allocate the whole channel to only one client at a time, or it may partition the OFDMA channel to serve multiple clients simultaneously. For example, an 802.11ax AP could simultaneously communicate with one 802.11ax client using 8 MHz of frequency space while communicating with three additional 802.11ax clients using 4 MHz sub-channels. These simultaneous communications can be either downlink or uplink.
Figure 2 – OFDMA transmissions over time
In the example shown in Figure 2, the 802.11ax AP first transmits downlink to 802.11ax clients 1 and 2 simultaneously. The 20 MHz OFDMA channel is effectively partitioned into two sub-channels. Remember that an OFDMA 20 MHz channel has a total of 256 subcarriers; here the AP transmits simultaneously to clients 1 and 2 using two different 106-tone resource units. In the second transmission, the AP simultaneously transmits downlink to clients 3, 4, 5, and 6. In this case, the OFDMA channel is partitioned into four separate 52-tone sub-channels. In the third transmission, the AP uses a single 242-tone resource unit to transmit downlink to a single client (5). Using a single 242-tone resource unit effectively uses the entire 20 MHz channel. In the fourth transmission, the AP simultaneously transmits downlink to clients 4 and 6 using two 106-tone resource units. In the fifth transmission, the AP once again transmits downlink to a single client, with a single 242-tone RU utilizing the entire 20 MHz channel. In the sixth transmission, the AP simultaneously transmits downlink to clients 3, 4, and 6. In this instance, the 20 MHz channel is partitioned into three sub-channels; two 52-tone RUs are used for clients 3 and 4, and a 106-tone RU is used for client 6.
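For readers who prefer to see it as data, the Figure 2 schedule can be written down as per-transmission RU allocations; this is just a representation of the example above (the post doesn't say which client receives the fifth, full-channel transmission, so that entry is left unnamed).

```python
# Each transmission (TXOP) maps to a list of (client, RU size in tones) allocations.
figure2_schedule = [
    [("client 1", 106), ("client 2", 106)],
    [("client 3", 52), ("client 4", 52), ("client 5", 52), ("client 6", 52)],
    [("client 5", 242)],
    [("client 4", 106), ("client 6", 106)],
    [(None, 242)],  # full-channel transmission; the post doesn't identify the client
    [("client 3", 52), ("client 4", 52), ("client 6", 106)],
]

def fits_in_20mhz(txop, tone_budget=242):
    """Crude sanity check: allocated tones must not exceed the 242 usable tones
    of a 20 MHz channel (real RU placement also follows fixed positions)."""
    return sum(tones for _, tones in txop) <= tone_budget

assert all(fits_in_20mhz(txop) for txop in figure2_schedule)
```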
For backward compatibility, 802.11ax radios will still support OFDM. Keep in mind that 802.11 management and control frames will still be transmitted at a basic data rate using OFDM technology that 802.11a/g/n/ac radios can understand. Therefore, management and control frames will be transmitted using the standard 64 OFDM subcarriers of an entire primary 20 MHz channel. OFDMA is only for 802.11 data frame exchanges between 802.11ax APs and 802.11ax clients. Please check back every week and read future 802.11ax blogs, where we will discuss in more detail the mechanisms of OFDMA, including resource unit allocation and trigger frames. We will also discuss the differences between downlink OFDMA and uplink OFDMA.

OFDM & OFDMA in the context of WiFi 802.11ax technology

802.11a/g/n/ac radios currently use Orthogonal Frequency Division Multiplexing (OFDM) for single-user transmissions on an 802.11 frequency. 802.11ax radios can utilize orthogonal frequency-division multiple access (OFDMA), which is a multi-user version of the OFDM digital-modulation technology. OFDMA subdivides a channel into smaller frequency allocations, called resource units (RUs). By subdividing the channel, small frames can be transmitted to multiple users in parallel.
Think of OFDMA as a technology that partitions a channel into smaller sub-channels so that simultaneous multiple-user transmissions can occur. For example, a traditional 20 MHz channel might be partitioned into as many as nine smaller sub-channels. Using OFDMA, an 802.11ax AP could simultaneously transmit small frames to nine 802.11ax clients. OFDMA is a much more efficient use of the medium for smaller frames. The simultaneous transmission cuts down on excessive overhead at the MAC sublayer as well as medium contention overhead. The goal of OFDMA is better use of the available frequency space. OFDMA technology has been time-tested with other RF communications. For example, OFDMA is used for downlink LTE cellular radio communication.
To illustrate the difference between OFDM and OFDMA, please reference both Figures 1 and 2. When an 802.11n/ac AP transmits downlink to 802.11n/ac clients on an OFDM channel, the entire frequency space of the channel is used for each independent downlink transmission. In the example shown in Figure 1, the AP transmits to six clients independently over time. All 64 subcarriers are used when an OFDM radio transmits on a 20 MHz channel. In other words, the entire 20 MHz channel is needed for the downlink communication between the AP and a single OFDM client. The same holds true for any uplink transmission from a single 802.11n/ac client to the 802.11n/ac AP. The entire 20 MHz OFDM channel is needed for the client transmission to the AP.
Figure 1- OFDM transmissions over time
As shown in Figure 2, an 802.11ax AP can partition a 20 MHz OFDMA channel into smaller sub-channels for multiple clients on a continuous basis for simultaneous downlink transmissions. In a future blog, you will learn that an 802.11ax AP can also synchronize 802.11ax clients for simultaneous uplink transmissions. It should be noted that the rules of medium contention still apply. The AP still has to compete against legacy 802.11 stations for a transmission opportunity (TXOP). Once the AP has a TXOP, it is then in control of up to nine 802.11ax client stations for either downlink or uplink transmissions. The number of resource units (RUs) used can vary on a per-TXOP basis.
Figure 2- OFDMA transmissions over time

Friday, 14 September 2018

AT&T's CBRS perspectives



CBRS spectrum has been a hot topic in the wireless industry for some time. Sometimes referred to as the "Innovation Band", it has prompted a lot of discussion about how operators will use this spectrum and what kind of benefits customers will see.

What is CBRS and "Shared Spectrum?"

CBRS has been defined by the FCC as a shared spectrum band (3.55-3.7 GHz) to enable efficient use of finite spectrum resources. With shared spectrum, an operator isn't required to buy and own spectrum. Spectrum can be used unlicensed, or an operator can purchase a "temporary" license valid for a certain period of time. This new "shared" ownership concept will allow operators to deploy services over this band more quickly and efficiently.

However, there are some additional considerations to take into account when it comes to CBRS spectrum. Since it is a spectrum band shared with incumbent users such as the military, it requires a spectrum-sharing framework called a Spectrum Access System (SAS). SAS providers will allow carriers to use the 3.5 GHz band without creating interference. Using a SAS lets the incumbents use the 3.5 GHz band when needed, while freeing the spectrum for commercial use at all other times.

How will AT&T use CBRS?

We've been researching and considering the best use cases for CBRS for some time. We think there are a number of compelling use cases for this spectrum, including private LTE networks for enterprises, use as a "neutral host band" in large indoor venues like stadiums and smart factories, and fixed wireless internet (FWI) deployments in rural and underserved areas. In the future, we'll look to migrate to 5G over CBRS spectrum, which can potentially be used for 5G densification in urban areas.

To meet the rapidly growing demand for FWI, today we announced that we are looking at initially deploying Next Generation Fixed Wireless Internet using the CBRS spectrum band. This innovative spectrum band will allow us to meet our fixed wireless expansion commitments and deliver an internet connection to more Americans in rural and underserved communities.

We plan to primarily use the CBRS solution to deliver home and enterprise broadband services in suburban and rural locations. Millions of US households lack access to broadband service, and in many cases a fixed wireless access architecture can cost-effectively reach homes and businesses where fiber cannot.

Additionally, the CBRS solution we're planning to deploy uses a Massive MIMO architecture to enable faster downloads, greater network capacity and an enhanced wireless experience.

What's next for CBRS?

While the use cases for CBRS spectrum are exciting, there are still a number of things that need to happen before we can start commercial deployments. Our next step is to start testing CBRS equipment in the lab late this year. While deployments may proceed with General Authorized Access (GAA) commercial operation in 2019, we're looking forward to participating in the Priority Access License (PAL) auction. We await the FCC's final auction structure and will participate at the appropriate time.

AT&T Edge Computing Test Zone: What have we learned so far, and what's ahead?



In early 2018, the AT&T Foundry launched an edge computing test zone in Palo Alto, CA to experiment with emerging applications on this new network infrastructure paradigm. Edge computing is the act of moving storage and processing capabilities to the perimeter of the network, geographically closer to the end user.

On the challenges that have appeared, AT&T said that as next-gen applications require increasing processing power on mobile devices, its team engaged with Silicon Valley companies facing challenges that edge computing could potentially address.

We first turned our focus to media applications such as augmented reality, virtual reality, and cloud-driven gaming. We've now completed our first experiment with GridRaster. The goal of this phase of our collaboration was to quantitatively understand how improved network performance metrics, such as delay and packet jitter, would translate to improvements in application performance metrics, such as motion-to-photon latency and frame loss, yielding a better experience for the end user.

As anticipated, the edge configuration presented the most favorable outcomes for application performance. However, our experimentation uncovered additional nuances. We believe that network optimization is critical to enable mobile, cloud-based immersive media. But that's not enough. First, we believe companies in this ecosystem need to streamline functions throughout the entire capture and rendering pipeline and devise new techniques to distribute functions between the cloud and mobile devices. Second, we discovered that the most notable benefits of edge computing come from delay predictability, rather than the amount of delay itself. We therefore believe that cloud-based immersive media applications will likely benefit from network functions and applications working more synergistically in real-time.
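That point about predictability mattering more than raw delay can be illustrated with a toy simulation; the numbers, delay distribution, and 20 ms render deadline below are invented purely for illustration and are not measurements from the GridRaster experiment.

```python
import random

def missed_frame_fraction(mean_ms: float, jitter_ms: float,
                          deadline_ms: float = 20.0,
                          frames: int = 10_000, seed: int = 0) -> float:
    """Fraction of frames whose network delay exceeds a fixed render deadline.
    Delay is drawn uniformly from mean_ms +/- jitter_ms (illustrative model only)."""
    rng = random.Random(seed)
    late = sum(1 for _ in range(frames)
               if mean_ms + rng.uniform(-jitter_ms, jitter_ms) > deadline_ms)
    return late / frames

# Same average delay, very different outcomes once jitter grows:
print(missed_frame_fraction(mean_ms=15, jitter_ms=2))   # predictable, edge-like link: 0.0
print(missed_frame_fraction(mean_ms=15, jitter_ms=10))  # jittery link: roughly a quarter late
```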

Based on these insights, the AT&T Foundry plans to take a deeper dive with application developers to re-imagine and re-architect how these immersive media applications are designed and implemented.

We'll be collaborating with NVIDIA to experiment with new ways of delivering experiences over 5G and edge computing technology. At AT&T Spark, the AT&T Foundry, Ericsson and NVIDIA will unveil a GeForce NOW edge computing demo that will showcase Shadow of the Tomb Raider using the power of cloud gaming over a 5G network. It's a glimpse at what the future holds and a prime example of what we can achieve through this collaboration.

In that collaborative vein, AT&T also recently launched Akraino Edge Stack via the Linux Foundation. Akraino is an open source software stack that supports high-availability cloud services optimized for edge computing systems and applications. The project recently moved from formation into execution and has over a dozen members. AT&T Spark will also host a demo of Akraino running a VR experience and leveraging artificial intelligence.                                                                              

The AT&T Foundry is also expanding its edge test zone footprint to cover the full Bay Area, allowing for increased application mobility and broader collaboration potential. We will continue to evaluate potential use cases from prospective ecosystem partners that could benefit from edge computing.  This includes continuing to find ways of enhancing mobile immersive media experiences, as well as testing future 5G applications such as self-driving cars.

The AT&T Foundry's rapid innovation model allows us to pivot based on key learnings and better direct our testing to get to the core of how edge computing can provide tangible benefits and value to current and future use cases. With 5G powering the next evolution of our network, edge computing will continue to be at the forefront of how we provide the ultimate user experience for next-gen applications.