Friday, 28 September 2018

ETSI - MEC, or Multi-access Edge Computing, is the kingpin for 5G KPI achievements



In a recent white paper, the European Telecommunications Standards Institute outlined the role of multi-access edge computing in 5G. 

"Edge computing is acknowledged as one of the key pillars for meeting the demanding Key Performance Indicators of 5G, especially as far as low latency and bandwidth efficiency are concerned," the white paper noted. "However, not only is edge computing in telecommunications networks a technical enabler for the demanding KPIs, it also plays an essential role in the transformation of the telecommunications business, where telecommunications networks are turning into versatile service platforms for industry and other specific customer segments."

ETSI White paper

Tuesday, 25 September 2018

The need for Indian broadband, and where does the future lie?

Just a couple of years back, when Reliance Jio debuted on the Indian telecom scene with its LTE offerings, end users were in for a series of surprises: not only the commercial surprise of free offerings, which startled the incumbents even more, but the sheer data rates that LTE can deliver. People who were not accustomed to applications like Skype saw video calls on their handsets for the first time, and could see their near and dear ones, far apart, over video in the palm of their hand. It was all exhilarating and exciting, boosted by the free data available to use. The most admirable part of the story is that Jio's gigantic plan took Indian telecom into its next phase.

As we know, technologies are continuously evolving, not only those that transmit data but also those that generate or devour it. Demand is increasing along with supply, and the intermediaries are coping with that too. Taking a comprehensive end-to-end view, a whole ecosystem has emerged to create new services and infrastructures that deliver more surprises; needless to say, applications like AR/VR and autonomous vehicles, among many others, will keep end users in awe in the time to come.

The core of all this lies in ICT. The new phase of ICT is being shaped by the notion of Industry 4.0, and the reach of ICT depends on access to broadband, whether wireless or wireline; that is the commercial side of things.

Indian Perspectives…..

Broadband in India has not yet been taken seriously in the sense of social empowerment and of connecting remote, last-mile users to the mainstream. Reliance Jio arrived with LTE as a telecom unicorn, because India is a mass market and the whole world is moving to mass markets; gone are the days when industries captured only the market of the elite.

The success of Reliance lies in its understanding of the Indian market and in planning big, covering the whole gamut: not only providing network services and infrastructure but also enabling the common man to access them through free data and cheap mobile handsets. Moreover, without exaggerating, nothing is actually free; the Indian public can be put to good use for heavy load testing, so instead of circulating test phones to an enterprise workforce, why not give them to the people themselves for free? That is another way to view Reliance's wisdom.

But the question here is not how to set up another incumbent; it is social reform. Government regulatory organizations are also of the mindset of setting up incumbents, completely overlooking the power of ICT and, most importantly, broadband. Poor data rates and call-drop issues are crippling the real power of ICT, handicapping it, and without the right kind of regulation and control over services, the reliability, sustainability, and trust needed for new kinds of services break down.

Leveraging Govt Infra……

Government initiatives like "BharatNet", which aims to provide information highways through a fibre network across the country, and programmes like Digital India and Make-in-India, which create a whole ecosystem, are viable ingredients. The Government has also pushed for adopting 5G at large scale, with the DoT setting up a test bed to incubate it. These are no doubt encouraging and remarkable moves, and they are yielding outcomes, but more needs to be done.

While the optical network built through "BharatNet" is proving to be good backhaul infrastructure for last-mile access, other options are being, and will be, explored too. Recently, Reliance Jio tied up with Hughes to use its satellite systems to provide backhaul connectivity for last-mile LTE access in remote parts of India.

A project like "BharatNet" is the most important one for setting the base, or minimum viable infrastructure, for Indian broadband, though the focus must also move to satellite and other technologies. Forums like BIF (Broadband India Forum) have been catalysing the use of satellite backhaul, which is encouraging new entrepreneurs to come forward. Over the last year or two, it has been closely observed that businesses are cropping up on top of such infrastructure, facilitated by cheap Chinese suppliers for their equipment needs.

Leveraging WiFi ……

Among this new breed of last-mile access providers, WiFi has emerged as the prominent technology, thanks to its ease of deployment and low cost of management. We have seen many retail-like businesses mushrooming, not only in rural but also in urban India. So it is not only incumbents like Reliance Jio; people otherwise inclined towards grocery or garment businesses are also taking advantage of the Indian market, in whatever capacity they have, for their own gains.

The Indian market is not yet mature enough to differentiate data. Since users are accustomed to loosely controlled services, there is little viability, at least in the current scenario, for strict data classification; a fast internet connection is enough to deliver almost all contemporary services with a manageable user experience.

WiFi security in particular, and network security in general, are of paramount importance, but they get little consideration from network service providers, whose prominent motto is to win business, and the general public is not conscious of security on their networks. This stems not only from consumer ignorance but from our social behavioural traits as well, as we have not been part of World War One or Two.

So, where does the future lie?

The future lies with us. WiFi is a technology that is deeply rooted and well established, not only coping and sustaining alongside other wireless access technologies but beating them too. The existing 802.11ac standard and the upcoming 802.11ax are quite sufficiently empowered to serve Indian broadband needs well in advance, until 5G gains its footprint. Even where the giants focus on 5G, in the near term it will mostly be FWA use cases and backhaul requirements, in addition to fibre and satellite.

WiFi has the power to fulfil not only broadband requirements but also IoT- and IIoT-specific requirements. WiFi, in the Indian as well as many Asian perspectives, is therefore a holy grail.

Thinking about WiFi is therefore not about a few meagre objectives like creating employment in the form of retail businesses, or providing connectivity for common services through PDOs or CSCs. It needs a big idea: to invest in and deliver countrywide, associative, and affordable broadband access that connects the unconnected and brings the remote into the mainstream. It is not a technical challenge; it is the will to create social impact, and to deliver WiFi the right way, with the right business cases, in a comprehensive and coordinated manner.

Industry associations or the right forums must come forward for this kind of initiative. TRAI's PDO is not going to create the required framework on its own; an integrated solution with a big idea is needed.


Saurabh Verma
Consultant & Founder
Fundarc Communication (xgnlab)
Noida, India - 201301
M:7838962939/9654235169
saurabhverma@xgnlab.com
www.xgnlab.com



Friday, 21 September 2018

Difference Between Containers and Virtual Machines?



Hypervisors are a way to manage virtual machines (VMs) on processors that support the virtual replication of hardware. Not all processors have this type of hardware—it's typically found in mid- to high-end microprocessors. It's standard fare on server processors like Intel's Xeon and found on most application processors such as the Arm Cortex-A series. Typically, a VM will run any software that runs on the bare metal hardware while providing isolation from the real hardware. Type 1 hypervisors run on bare metal, while Type 2s have an underlying operating system (see figure, a).
Containers vs. VMs
Containers also provide a way to isolate applications and provide a virtual platform for applications to run on (see figure, b). Two main differences exist between a container and a hypervisor system.
The container's system requires an underlying operating system that provides the basic services to all of the containerized applications using virtual-memory support for isolation. A hypervisor, on the other hand, runs VMs that have their own operating system using hardware VM support. Container systems have a lower overhead than VMs and container systems typically target environments where thousands of containers are in play. Container systems usually provide service isolation between containers. As a result, container services such as file systems or network support can have limited resource access.
There is also something called para-virtualization, which is sort of a mix between the two approaches. It uses virtual-memory support for isolation, but it requires special device drivers in the VM that are linked through the hypervisor to the underlying operating system, which in turn provides the device services.
A hardware VM system forces any communication with a VM to go through the hardware. Some systems allow real hardware to map directly to a VM's environment, enabling the VM's device driver to directly handle the hardware. Hardware I/O virtualization also allows a single hardware device like an Ethernet adapter to present multiple, virtual instances of itself so that multiple VMs can manage their instance directly.
Virtual machines (VM) are managed by a hypervisor and utilize VM hardware (a), while container systems provide operating system services from the underlying host and isolate the applications using virtual-memory hardware (b).
In a nutshell, a VM provides an abstract machine that uses device drivers targeting the abstract machine, while a container provides an abstract OS. A para-virtualized VM environment provides an abstract hardware abstraction layer (HAL) that requires HAL-specific device drivers. Applications running in a container environment share an underlying operating system, while VM systems can run different operating systems. Typically a VM will host multiple applications whose mix may change over time versus a container that will normally have a single application. However, it's possible to have a fixed set of applications in a single container.
Virtual-machine technology is well-known in the embedded community, but containers tend to be the new kid on the block, so they warrant a bit more coverage in this article. Containers have been the rage on servers and the cloud, with companies like Facebook and Google investing heavily in container technology. For example, each Google Docs service gets a container per user instance.
A number of container technologies are available, with Linux leading the charge. One of the more popular platforms is Docker, which is now based on Linux libcontainer. Actually, Docker is a management system that's used to create, manage, and monitor Linux containers. Ansible is another container-management system favored by Red Hat.
Microsoft is a late arrival to the container approach, but its Windows Containers is a way to provide container services on a Windows platform. Of course, it's possible to host a Linux container service as a VM on Microsoft server platforms like Hyper-V. Container-management systems like Docker and Ansible can manage Windows-based servers providing container support.
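To make the container workflow concrete, here is a minimal sketch using the Docker SDK for Python (the `docker` package). It assumes a local Docker daemon is running and that the `alpine` image can be pulled; it is an illustration, not part of the article above.

```python
# Minimal sketch: creating and inspecting a Linux container via the Docker SDK
# for Python. Assumes `pip install docker` and a running local Docker daemon.
import docker

client = docker.from_env()          # connect to the local Docker daemon

# Run a short-lived container; Docker pulls the image if it is not cached.
container = client.containers.run(
    "alpine:latest",
    command="echo hello-from-a-container",
    detach=True,                    # return immediately with a Container object
)

container.wait()                    # block until the command finishes
print(container.logs().decode())   # -> "hello-from-a-container"
container.remove()                  # clean up the stopped container
```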
Based-File Systems, Virtual Containers and Thin VMs
Containers provide a number of advantages over VMs, although some can be addressed using other techniques. One advantage is the low overhead of containers and, therefore, the ability to start new containers quickly. This is because starting the underlying OS in a VM takes time, memory, and the space needed for the VM disk storage. It may be difficult to address the time issue, but the other two can be addressed.
The easiest is the VM disk storage. Normally, a VM needs at least one unique image file for every running instance of a VM. It contains the OS and often the application code and data as well. Much of this is common among similar VMs. In the case of a raw image, a complete copy of the file is needed for each instance. This could require copying multiple gigabytes per instance.
The alternative is to use a based-file format like QEMU's qcow2, which is supported by Linux's KVM virtual-machine manager. In this case, an initial instance of the VM is set up and the operating system is installed possibly with additional applications. The VM is then terminated and the resulting file is used as the base for subsequent qcow2 files.
Setting up one of these subsequent files takes minimal time and space. It can then be used by a new VM, where changes made to the disk are recorded in the new file. Typically, the based file will contain information that will not change in the new file, although doing something like updating the operating system may cause the new file to grow significantly. This masks the original file to the point where the original will not be referenced, since all of its data has been overwritten.
The chain of based files can continue so that there may be a starting image with just the operating system. The next in the chain may add services like a database application. Another might add a web server. Starting up a new instance of a database server would build a new file starting from the image with the database in it, while a web server with database server would start from the database/web-server file.
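As a concrete illustration of such a chain, the sketch below shells out to the standard qemu-img tool to create a base image and two overlays that use it as a backing file. The file names are hypothetical, and on newer QEMU versions the backing format must be stated explicitly with -F.

```python
# Sketch: building a qcow2 backing-file chain with qemu-img (file names are
# hypothetical). Each overlay records only the blocks that differ from its base.
import subprocess

def qemu_img(*args):
    subprocess.run(["qemu-img", *args], check=True)

# 1) Base image holding just the operating system (10 GB virtual size).
qemu_img("create", "-f", "qcow2", "os-base.qcow2", "10G")

# 2) Overlay that adds, say, a database installation on top of the OS image.
qemu_img("create", "-f", "qcow2", "-F", "qcow2",
         "-b", "os-base.qcow2", "db-layer.qcow2")

# 3) A new VM instance starts from the database layer; only its own changes
#    are written to this (initially tiny) file.
qemu_img("create", "-f", "qcow2", "-F", "qcow2",
         "-b", "db-layer.qcow2", "db-instance-1.qcow2")

# Inspect the chain of backing files.
qemu_img("info", "--backing-chain", "db-instance-1.qcow2")
```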
The use of based files addresses duplication of file storage. For memory deduplication, we need to turn to the hypervisor. Some hypervisors can determine when particular memory blocks are duplicates, such as the underlying OS code, assuming two or more VMs use the same OS with exactly the same code. This approach can significantly reduce the amount of memory required, depending on the size of shared code or data that can be identified.
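On Linux/KVM hosts, one such mechanism is Kernel Samepage Merging (KSM). The hedged sketch below simply reads the KSM counters the kernel exposes under /sys to estimate how much memory is currently being shared; it assumes a Linux host with KSM available.

```python
# Sketch: reading Linux KSM (Kernel Samepage Merging) statistics to estimate
# memory deduplicated across VMs. Assumes a Linux host with KSM available.
from pathlib import Path

KSM = Path("/sys/kernel/mm/ksm")
PAGE_SIZE = 4096  # bytes; typical x86-64 page size

def ksm_stat(name: str) -> int:
    return int((KSM / name).read_text())

if KSM.exists():
    shared = ksm_stat("pages_shared")     # physical pages kept after merging
    sharing = ksm_stat("pages_sharing")   # virtual pages mapped onto those pages
    saved_bytes = (sharing - shared) * PAGE_SIZE
    print(f"KSM is saving roughly {saved_bytes / 2**20:.1f} MiB of RAM")
else:
    print("KSM not available on this host")
```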
An issue with containers is the requirement that the underlying OS be the same for all containers being supported. This is often a happy occurrence for embedded systems in which applications can be planned to use the same OS. Of course, this isn't always the case; this can even be an issue in the cloud. The answer is to run the container system in its own VM. In fact, the management tools can handle this, because a collection of services/containers will often be designed to run on a common container platform.
Finally, there's the idea of thin VMs. These VMs have a minimal OS and run a single application. Many times, the OS forwards most of the service requests, such as file access, to a network server. Stripped-down versions of standard operating systems like Linux are substantially smaller. In the extreme case, the OS support is actually linked into the application so that the VM is just running a single program. For embedded applications, the network communication may be done via shared memory, providing a quick way to communicate with other VMs on the same system.  
No one approach addresses all embedded applications, and there may be more than one reasonable alternative to deploying multiple program instances. It will be more critical to consider the alternatives when designing a system as the world moves from single-core platforms to ones with many, many cores.
source

Thursday, 20 September 2018

WiFi 802.11ax - what is a Resource Unit?

A 20 MHz OFDMA channel consists of a total of 256 subcarriers (tones). These tones are grouped into smaller sub-channels, known as resource units (RUs). As shown in Figure 1, when subdividing a 20 MHz channel, an 802.11ax access point designates 26, 52, 106, and 242 subcarrier resource units (RUs), which equates roughly to 2 MHz, 4 MHz, 8 MHz, and 20 MHz channels, respectively. The 802.11ax access point dictates how many RUs are used within a 20 MHz channel, and different combinations can be used.
Figure 1- OFDMA resource units – 20 MHz channel
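To make the RU-to-bandwidth mapping concrete, the short sketch below multiplies each RU size by the 802.11ax OFDMA subcarrier spacing of 78.125 kHz; the figures are approximate, since guard and DC tones are ignored.

```python
# Sketch: approximate bandwidth occupied by each 802.11ax resource-unit size
# in a 20 MHz channel (guard/DC tones ignored for simplicity).
SUBCARRIER_SPACING_KHZ = 78.125   # 802.11ax OFDMA tone spacing

for tones in (26, 52, 106, 242):
    bw_mhz = tones * SUBCARRIER_SPACING_KHZ / 1000
    print(f"{tones:>3}-tone RU ~ {bw_mhz:5.2f} MHz")

# Approximate output: 26 ~ 2.03 MHz, 52 ~ 4.06 MHz, 106 ~ 8.28 MHz, 242 ~ 18.91 MHz
```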
An 802.11ax AP may allocate the whole channel to only one client at a time, or it may partition the OFDMA channel to serve multiple clients simultaneously. For example, an 802.11ax AP could simultaneously communicate with one 802.11ax client using 8 MHz of frequency space while communicating with three additional 802.11ax clients using 4 MHz sub-channels. These simultaneous communications can be either downlink or uplink.
Figure 2 – OFDMA transmissions over time
In the example shown in Figure 2, the 802.11ax AP first transmits downlink simultaneously to 802.11ax clients 1 and 2. The 20 MHz OFDMA channel is effectively partitioned into two sub-channels. Remember that an OFDMA 20 MHz channel has a total of 256 subcarriers; here the AP transmits simultaneously to clients 1 and 2 using two different 106-tone resource units. In the second transmission, the AP simultaneously transmits downlink to clients 3, 4, 5, and 6. In this case, the OFDMA channel is partitioned into four separate 52-tone sub-channels. In the third transmission, the AP uses a single 242-tone resource unit to transmit downlink to a single client (5); using a single 242-tone resource unit effectively uses the entire 20 MHz channel. In the fourth transmission, the AP simultaneously transmits downlink to clients 4 and 6 using two 106-tone resource units. In the fifth transmission, the AP once again transmits downlink to only a single client, with a single 242-tone RU utilizing the entire 20 MHz channel. In the sixth transmission, the AP simultaneously transmits downlink to clients 3, 4, and 6. In this instance, the 20 MHz channel is partitioned into three sub-channels; two 52-tone RUs are used for clients 3 and 4, and a 106-tone RU is used for client 6.
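The sequence above can be summarised programmatically. This hedged sketch just lists each transmission's RU allocation and checks that the tones assigned never exceed the 242 usable data tones of a 20 MHz channel; the client numbering follows the figure, and the data are illustrative only.

```python
# Sketch: the per-transmission RU allocations described above, with a sanity
# check that each TXOP stays within the 242 usable tones of a 20 MHz channel.
transmissions = [
    {1: 106, 2: 106},              # two 106-tone RUs
    {3: 52, 4: 52, 5: 52, 6: 52},  # four 52-tone RUs
    {5: 242},                      # full-channel 242-tone RU
    {4: 106, 6: 106},              # two 106-tone RUs
    {5: 242},                      # full channel again (client not named in text; 5 assumed)
    {3: 52, 4: 52, 6: 106},        # mixed 52/52/106 allocation
]

for i, alloc in enumerate(transmissions, start=1):
    total = sum(alloc.values())
    assert total <= 242, f"transmission {i} over-allocates the channel"
    print(f"TX {i}: clients {sorted(alloc)} use {total} of 242 data tones")
```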
For backward compatibility, 802.11ax radios will still support OFDM. Keep in mind that 802.11 management and control frames will still be transmitted at a basic data rate using OFDM technology that 802.11a/g/n/ac radios can understand. Therefore, management and control frames will be transmitted using the standard 64 OFDM subcarriers of an entire primary 20 MHz channel. OFDMA is only for 802.11 data frame exchanges between 802.11ax APs and 802.11ax clients. Please check back every week and read future 802.11ax blogs, where we will discuss in more detail the mechanisms of OFDMA, including resource unit allocation and trigger frames. We also discuss the differences between downlink OFDMA and uplink OFDMA.

OFDM & OFDMA in context of WiFi 802.11ax technology.

802.11a/g/n/ac radios currently use Orthogonal Frequency Division Multiplexing (OFDM) for single-user transmissions on an 802.11 frequency. 802.11ax radios can utilize orthogonal frequency-division multiple access (OFDMA) which is a multi-user version of the OFDM digital-modulation technology. OFDMA subdivides a channel into smaller frequency allocations, called resource units (RUs). By subdividing the channel, parallel transmissions of small frames to multiple users can happen simultaneously.
Think of OFDMA as a technology that partitions a channel into smaller sub-channels so that simultaneous multiple-user transmissions can occur. For example, a traditional 20 MHz channel might be partitioned into as many as nine smaller sub-channels. Using OFDMA, an 802.11ax AP could simultaneously transmit small frames to nine 802.11ax clients. OFDMA is a much more efficient use of the medium for smaller frames. The simultaneous transmission cuts down on excessive overhead at the MAC sublayer as well as medium contention overhead. The goal of OFDMA is better use of the available frequency space. OFDMA technology has been time-tested with other RF communications. For example, OFDMA is used for downlink LTE cellular radio communication.
To illustrate the difference between OFDM and OFDMA, please reference both Figures 1 and 2. When an 802.11n/ac AP transmits downlink to 802.11n/ac clients on an OFDM channel, the entire frequency space of the channel is used for each independent downlink transmission. In the example shown in Figure 1, the AP transmits to six clients independently over time. All 64 subcarriers are used when an OFDM radio transmits on a 20 MHz channel. In other words, the entire 20 MHz channel is needed for the downlink communication between the AP and a single OFDM client. The same holds true for any uplink transmission from a single 802.11n/ac client to the 802.11n/ac AP. The entire 20 MHz OFDM channel is needed for the client transmission to the AP.
Figure 1- OFDM transmissions over time
As shown in Figure 2, an 802.11ax AP can partition a 20 MHz OFDMA channel into smaller sub-channels for multiple clients on a continuous basis for simultaneous downlink transmissions. In a future blog, you will learn that an 802.11ax AP can also synchronize 802.11ax clients for simultaneous uplink transmissions. It should be noted that the rules of medium contention still apply. The AP still has to compete against legacy 802.11 stations for a transmission opportunity (TXOP). Once the AP has a TXOP, the AP is then in control of up to nine 802.11ax client stations for either downlink or uplink transmissions. The number of resource units (RUs) used can vary on a per-TXOP basis.
Figure 2- OFDMA transmissions over time

Friday, 14 September 2018

AT&T's CBRS perspectives



CBRS spectrum has been a hot topic in the wireless industry for some time. Sometimes referred to as the "Innovation Band", there has been a lot of discussion on how operators will use this spectrum and what kind of benefits customers will see.

What is CBRS and "Shared Spectrum?"

CBRS has been defined by the FCC as a shared spectrum band (3.55-3.7 GHz) to enable efficient use of finite spectrum resources. With shared spectrum, an operator isn't required to buy and own spectrum. Spectrum can be either unlicensed or an operator could purchase a "temporary" license that would be valid for a certain period of time. This new "shared" ownership concept will allow operators to deploy services over this band quicker and more efficiently. 

However, there are some additional considerations to take into account when it comes to CBRS spectrum. Since it is a band shared with incumbent users like the military, it requires a spectrum sharing structure called a Spectrum Access System (SAS). SAS providers will allow carriers to use the 3.5 GHz band without creating interference. Using a SAS lets the incumbents use the 3.5 GHz band when needed, but frees the spectrum for commercial use at all other times.
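To give a flavour of how a CBRS device (CBSD) asks a SAS for permission to transmit, here is a hedged sketch of a grant request loosely modelled on the WInnForum SAS-CBSD interface; the identifiers, values, and workflow shown are purely illustrative assumptions, not AT&T's implementation.

```python
# Sketch: an illustrative SAS grant request for a CBRS device (CBSD), loosely
# modelled on the WInnForum SAS-CBSD interface. All values are made up.
import json

grant_request = {
    "grantRequest": [{
        "cbsdId": "example-cbsd-0001",            # identifier assigned at registration
        "operationParam": {
            "maxEirp": 30,                         # requested transmit power
            "operationFrequencyRange": {
                "lowFrequency": 3_560_000_000,     # Hz, inside the 3.55-3.7 GHz band
                "highFrequency": 3_570_000_000,
            },
        },
    }]
}

# In a real deployment this JSON would be sent over HTTPS to the SAS provider,
# which approves or denies the grant based on incumbent activity and other users.
print(json.dumps(grant_request, indent=2))
```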

How will AT&T use CBRS?

We've been researching and considering the best use cases for CBRS for some time. We think there are a number of compelling use cases for this spectrum, including private LTE networks for enterprises, using it as a "neutral host band" in large indoor venues like stadiums and smart factories, and deploying fixed wireless internet (FWI) in rural and underserved areas. In the future, we'll look to migrate to 5G over CBRS spectrum, which can potentially be used for 5G densification in urban areas.

To meet our rapidly growing demand for FWI, today we announced we are looking at initially deploying Next Generation Fixed Wireless Internet using the CBRS spectrum band. This innovative spectrum band will allow us to meet our fixed wireless expansion commitments and deliver an internet connection to more Americans in rural and underserved communities.

We plan to primarily use the CBRS solution to deliver home and enterprise broadband services in suburban and rural locations. Millions of US households lack access to broadband service, and in many cases a fixed wireless access architecture can cost-effectively reach homes and businesses where fiber cannot.

Additionally, the CBRS solution we're planning to deploy uses a Massive MIMO architecture to enable faster downloads, greater network capacity and an enhanced wireless experience.

What's next for CBRS?

While the use cases for CBRS spectrum are exciting, there are still a number of things that need to happen before we can start commercial deployments. Our next step is to start testing CBRS equipment in the lab late this year. While deployments may proceed with General Authorized Access (GAA) commercial operation in 2019, we're looking forward to participating in the Priority Access License (PAL) auction. We look forward to the final FCC structure and to participating in the auction at the appropriate time.

AT&T Edge Computing Test Zone: What has been learned so far, and what lies ahead?



In early 2018, the AT&T Foundry launched an edge computing test zone in Palo Alto, CA to experiment with emerging applications upon this new network infrastructure paradigm. Edge computing is the act of moving storage and processing capabilities to the perimeter of the network – or geographically closer to the end-user. 

AT&T said of the challenges that have appeared: as next-gen applications require increasing processing power on mobile devices, our team engaged with Silicon Valley companies facing challenges that edge computing could potentially address.

We first turned our focus to media applications such as augmented reality, virtual reality, and cloud-driven gaming. We've now completed our first experiment with GridRaster. The goal of this phase of our collaboration was to quantitatively understand how improved network performance metrics, such as delay and packet jitter, would translate into improvements in application performance metrics, such as motion-to-photon latency and frame loss, yielding a better experience for the end user.

As anticipated, the edge configuration presented the most favorable outcomes for application performance. However, our experimentation uncovered additional nuances. We believe that network optimization is critical to enable mobile, cloud-based immersive media. But that's not enough. First, we believe companies in this ecosystem need to streamline functions throughout the entire capture and rendering pipeline and devise new techniques to distribute functions between the cloud and mobile devices. Second, we discovered that the most notable benefits of edge computing come from delay predictability, rather than the amount of delay itself. We therefore believe that cloud-based immersive media applications will likely benefit from network functions and applications working more synergistically in real-time.
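As a rough illustration of the kind of network metrics compared in such an experiment, the sketch below measures round-trip delay and jitter against two hypothetical endpoints, one at the network edge and one in a distant cloud region; the hostnames are placeholders, not real AT&T infrastructure.

```python
# Sketch: comparing round-trip delay and jitter to a (hypothetical) edge node
# and a (hypothetical) distant cloud node. Hostnames are placeholders only.
import socket
import statistics
import time

def rtt_samples(host: str, port: int = 443, count: int = 20) -> list[float]:
    """Measure TCP connect round-trip times in milliseconds."""
    samples = []
    for _ in range(count):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass
        samples.append((time.perf_counter() - start) * 1000)
    return samples

for label, host in [("edge", "edge.example.net"), ("cloud", "cloud.example.net")]:
    rtts = rtt_samples(host)
    print(f"{label}: mean delay {statistics.mean(rtts):.1f} ms, "
          f"jitter (stdev) {statistics.stdev(rtts):.1f} ms")
```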

Based on these insights, the AT&T Foundry plans to take a deeper dive with application developers to re-imagine and re-architect how these immersive media applications are designed and implemented.

We'll be collaborating with NVIDIA to experiment with new ways of delivering experiences over 5G and edge computing technology. At AT&T Spark, the AT&T Foundry, Ericsson and NVIDIA will unveil a GeForce NOW edge computing demo that will showcase Shadow of the Tomb Raider using the power of cloud gaming over a 5G network. It's a glimpse at what the future holds and a prime example of what we can achieve through this collaboration.

In that collaborative vein, AT&T also recently launched Akraino Edge Stack via the Linux Foundation. Akraino is an open source software stack that supports high-availability cloud services optimized for edge computing systems and applications. The project recently moved from formation into execution and has over a dozen members. AT&T Spark will also host a demo of Akraino running a VR experience and leveraging artificial intelligence.                                                                              

The AT&T Foundry is also expanding its edge test zone footprint to cover the full Bay Area, allowing for increased application mobility and broader collaboration potential. We will continue to evaluate potential use cases from prospective ecosystem partners that could benefit from edge computing.  This includes continuing to find ways of enhancing mobile immersive media experiences, as well as testing future 5G applications such as self-driving cars.

The AT&T Foundry's rapid innovation model allows us to pivot based on key learnings and better direct our testing to get to the core of how edge computing can provide tangible benefits and value to current and future use cases. With 5G powering the next evolution of our network, edge computing will continue to be at the forefront of how we provide the ultimate user experience for next-gen applications. 

 

Wednesday, 12 September 2018

5G NR: Massive MIMO and Beamforming, and how to measure them in the field - by a Keysight solutions director


SU-MIMO vs. MU-MIMO

In legacy LTE, the term MIMO usually refers to Single User MIMO (SU-MIMO). In Single User MIMO, both the base station and the UE have multiple antenna ports and antennas, and multiple data streams are transmitted simultaneously to the UE using the same time/frequency resources, doubling (2×2 MIMO) or quadrupling (4×4 MIMO) the peak throughput of a single user.

In MU-MIMO, the base station sends multiple data streams, one per UE, using the same time-frequency resources. Hence, MU-MIMO increases the total cell throughput, i.e., the cell capacity. The base station has multiple antenna ports, as many as there are UEs receiving data simultaneously, and one antenna port is needed in each UE.

Massive MIMO

The most commonly seen definition is that mMIMO is a system where the number of antennas exceeds the number of users. In practice, massive means there are 32 or more logical antenna ports in the base station. It is expected that NEMs will start with a maximum of 64 logical antenna ports in 5G.

In MU-MIMO/mMIMO, the base station applies distinct precoding to the data stream of each UE, where the location of the UE, as well as the locations of all the other UEs, are taken into account to optimize the signal for the target UE while minimizing interference to the other UEs. To do this, the base station needs to know what the downlink radio channel looks like for each of the UEs.
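One common way to compute such per-UE precoding, given downlink channel estimates, is zero-forcing. The NumPy sketch below is a textbook illustration with a random channel, not a description of any vendor's implementation.

```python
# Sketch: zero-forcing precoding for MU-MIMO with a random downlink channel.
# Textbook illustration only; real base stations use more elaborate schemes.
import numpy as np

rng = np.random.default_rng(0)
num_bs_antennas, num_ues = 8, 4

# H[k, :] is the channel from the base-station array to UE k.
H = (rng.standard_normal((num_ues, num_bs_antennas))
     + 1j * rng.standard_normal((num_ues, num_bs_antennas))) / np.sqrt(2)

# Zero-forcing precoder: W = H^H (H H^H)^-1, then normalise each UE's column.
W = H.conj().T @ np.linalg.inv(H @ H.conj().T)
W /= np.linalg.norm(W, axis=0, keepdims=True)

# The effective channel H @ W is (nearly) diagonal: each UE sees only its own
# stream, which is exactly the "minimise interference to other UEs" goal.
effective = np.abs(H @ W)
print(np.round(effective, 3))
```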

Beamforming – principle of operation

The terms beamforming and mMIMO are sometimes used interchangeably. One way to put it is that beamforming is used in mMIMO, or that beamforming is a subset of mMIMO. In general, beamforming uses multiple antennas to control the direction of a wave-front by appropriately weighting the magnitude and phase of the individual antenna signals in an array of multiple antennas. That is, the same signal is sent from multiple antennas that have sufficient space between them (at least ½ wavelength). In any given location, the receiver will thus receive multiple copies of the same signal. Depending on the location of the receiver, the copies may be in opposite phases, destructively cancelling each other out, or they may constructively sum up if they are in the same phase, or anything in between.
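The constructive and destructive summation described above can be illustrated in a few lines of NumPy: a uniform linear array with half-wavelength spacing is steered toward a chosen angle by phase-weighting each element, and the resulting array factor peaks in that direction. This is a generic textbook sketch, not a 5G-specific implementation.

```python
# Sketch: array factor of a uniform linear array steered by per-element phase
# weights (half-wavelength element spacing). Generic textbook illustration.
import numpy as np

num_elements = 8
d = 0.5                      # element spacing in wavelengths
steer_deg = 20.0             # direction we want the beam to point

n = np.arange(num_elements)
# Phase weights that align the signals arriving from the steering direction.
weights = np.exp(-1j * 2 * np.pi * d * n * np.sin(np.radians(steer_deg)))

angles = np.linspace(-90, 90, 361)
phase = 2 * np.pi * d * np.outer(np.sin(np.radians(angles)), n)
array_factor = np.abs(np.exp(1j * phase) @ weights) / num_elements

peak = angles[np.argmax(array_factor)]
print(f"beam peaks at about {peak:.0f} degrees")   # ~20 degrees
```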

Beam-based coverage measurements in 5G

Coverage in 5G is beam-based, not cell-based. There is no cell-level reference channel from which the coverage of the cell could be measured. Instead, each cell has one or multiple Synchronization Signal Block (SSB) beams. SSB beams are static, or semi-static, always pointing in the same direction, and they form a grid of beams covering the whole cell area. The UE searches for and measures the beams, maintaining a set of candidate beams, which may contain beams from multiple cells. The metrics measured are SS-RSRP, SS-RSRQ, and SS-SINR for each beam. The Physical Cell ID (PCI) and the beam ID are the identifiers separating beams from each other. In field measurements, these metrics can be collected both with scanning receivers and with test UEs. Hence, SSB beams show up as a kind of new layer of mini-cells inside each cell in the field measurements.
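In drive-test tooling, picking the serving candidate from such measurements often reduces to sorting by SS-RSRP. The sketch below does exactly that over made-up (PCI, beam ID, SS-RSRP) samples, purely as an illustration of how beams, rather than cells, become the unit of coverage.

```python
# Sketch: choosing the strongest SSB beam from a set of field measurements.
# The (PCI, beam ID, SS-RSRP) values below are made up for illustration.
from dataclasses import dataclass

@dataclass
class BeamMeasurement:
    pci: int        # Physical Cell ID
    beam_id: int    # SSB beam index within the cell
    ss_rsrp: float  # dBm

candidate_set = [
    BeamMeasurement(pci=101, beam_id=0, ss_rsrp=-92.5),
    BeamMeasurement(pci=101, beam_id=3, ss_rsrp=-85.1),
    BeamMeasurement(pci=205, beam_id=1, ss_rsrp=-88.7),  # beam from another cell
]

best = max(candidate_set, key=lambda m: m.ss_rsrp)
print(f"best beam: PCI {best.pci}, beam {best.beam_id}, "
      f"SS-RSRP {best.ss_rsrp} dBm")
```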

Get full Article From Keysight HERE



Saturday, 8 September 2018

Gartner says the Next Generation Network Firewall market will rise 2X and rely on cloud-based CASB







"Next generation" capabilities have been achieved by all products in the enterprise network firewall market, and vendors differentiate on feature strengths. Security and risk management leaders must consider the trade-offs between best-of-breed enterprise network firewall functions and cost.



Strategic Planning Assumptions

Virtualized versions of enterprise network firewalls will reach 10% of market revenue by year-end 2020, up from less than 5% today.
By year-end 2020, 25% of new firewalls sold will include integration with a cloud-based cloud access security broker (CASB), primarily connected through APIs.
By 2020, 50% of new enterprise firewalls deployed will be used for outbound TLS inspection, up from less than 10% today.
Get the Report HERE



Friday, 7 September 2018

IoT- An Evolution from M2M and the Notion for Everything Connected


The notion of IoT emerged from earlier work on machine-to-machine communication, or M2M for short. M2M was the beginning of taking data from devices/sensors and feeding it to the internet through M2M gateways, on to data-processing servers and the like. It is known as M2M because no human intervention is required from the point of data generation to the point of data processing.
At the time, GSMA chief strategy officer Hyunmi Yang said, "Mobile networks are the platform upon which the M2M industry is being built and mobile operators are at the forefront in shaping the new business models that are driving this exciting market forward."


Essentially, it was about a network of devices, called a 'capillary network', connected to ISPs or a WAN/MAN through an M2M gateway, with the whole enablement managed through remote connectivity.
Find the full Article here with ElectronicsForYou