Today’s Demand for the Cloud
Cloud computing is an elastic platform that allows users to increase or decrease computing capacity on demand. We no longer need to predict future infrastructure needs, concern ourselves with warranty cycles, build data centers, or invest in reserve capacity. Instead, we need to focus on items specific to cloud computing: system performance, calculating the correct VM size to deploy so that we only pay for what we need, identifying migration candidates, and leveraging features that can accelerate planning a cloud migration without risking performance due to the learning curve a new computing platform presents.
Landscape – Discovery
Movere provides a unique take on traditional “discovery”. Many platforms discover data through specific queries or directed explorations, gathering data and delivering it back to the user. Movere, by contrast, is undirected. It explores and detects a business’s environment much the way a doctor injects dye into a patient’s blood to view the veins within the body. Because Movere is unguided and self-discovering, it can detect information that a business may not even have been aware of. It is agnostic to geographic location, domain, platform, hardware, and so on, and relays back comprehensive data.
Getting to the Cloud
Movere takes data and gives the user beautiful, intelligent methods for segmenting it into comprehensive and informative reporting. The raw data Movere collects (Active Directory, vCenter, VMM, Hyper-V, XenServer, SCCM, Altiris, LANDesk, LanSweeper, BigFix, SharePoint, SCOM, SCDPM, Exchange, Lync, etc.) is integrated and analyzed automatically. There is no human interaction with the source data from its collection through to its presentation on the Movere website. Custom profiles give customers the ability to group devices, assign licensing vehicles, or even exclude devices and users from the analysis entirely.
A few examples of ways users can use the raw data include:
– Location prioritization (closing a data center)
– Platform prioritization (retiring older technology such as ESX 4.0)
– Hardware and processor aging
– Continuing support agreements
IT data is complex, representing many different elements and interconnections: hypervisors, operating systems, applications, users, consumption, domains, subnets, and so on. We capture and present the entire picture, then you decide where the frame(s) go. The frames are persistent, meaning you can create them once and use them again and again.
Identifying Need vs. Actual and Proving TCO
Movere transforms incredibly complex and expansive data into valuable business insight. Movere captures actual Windows and Linux resource utilization, calculates the performance needed to satisfy peak demand, then identifies the cloud VM profile that will most economically meet that need.
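Movere’s sizing engine is proprietary, but the idea of “most economically meet that need” can be sketched: pick the cheapest VM profile whose specs cover the observed peak demand. In the minimal Python sketch below, the catalog names, specs, and hourly prices are invented for illustration and are not real Azure figures:

```python
from dataclasses import dataclass

@dataclass
class VMProfile:
    name: str
    vcpus: int
    ram_gb: float
    hourly_usd: float  # illustrative price, not a real Azure rate

# Hypothetical catalog; a real tool would use the live Azure size list.
CATALOG = [
    VMProfile("A1", 1, 1.75, 0.05),
    VMProfile("A2", 2, 3.5, 0.10),
    VMProfile("D2", 2, 7.0, 0.14),
    VMProfile("D3", 4, 14.0, 0.28),
]

def right_size(peak_vcpus: float, peak_ram_gb: float) -> VMProfile:
    """Return the cheapest profile that satisfies observed peak demand."""
    candidates = [vm for vm in CATALOG
                  if vm.vcpus >= peak_vcpus and vm.ram_gb >= peak_ram_gb]
    if not candidates:
        raise ValueError("no profile satisfies peak demand")
    return min(candidates, key=lambda vm: vm.hourly_usd)
```

Sizing from peak rather than provisioned capacity is the key point: a server allocated 16 GB of RAM but peaking at 3 GB would map to a far smaller, cheaper profile.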
Administrators use performance data to better understand how their systems are performing and to identify where bottlenecks are occurring, e.g. insufficient memory or low disk IOPS. Managing servers is a never-ending resource management exercise, with administrators constantly tasked with improving performance, redundancy and up-time while at the same time lowering costs. With performance and cost always at odds, administrators regularly find themselves struggling to improve each of these areas in step with one another and within budget.
Upgrading, Consolidation and Site Recovery:
Movere identifies:
– which systems can be upgraded
– which systems are outside of extended support, potentially invalidating support contracts with other vendors
– which systems are no longer even being used
– which systems have levels of usage so low that consolidating several devices into one would have little if any impact on performance
– which systems could make use of additional resources
– which systems could leverage features like Azure Site Recovery
Sizing and Planning a Cloud Migration:
A big impediment to moving into the cloud is deciding on the size of the device to build so as to avoid paying for resources that aren’t being utilized. This was a lower priority with on-premises implementations, where infrastructure was viewed as a sunk cost and over-provisioning memory or processing power carried no perceived incremental cost. In cloud computing, the clock, in terms of cost, is always ticking, and over-provisioning is akin to pouring money down the drain. The problem is that, just as the finance and sales systems of the ’90s forced assumptions about pricing, inventory volumes and distribution to be made on human judgment, sizing decisions are often made before the operational data needed to inform them can be delivered.
The same phenomenon affects us today when we consider moving part or all of our infrastructure to the cloud: what size(s) should we use? What region should they be built in? What dependencies exist that could break applications if they are no longer available?
Historically, organizations have simply gone out and purchased a new physical host when out of resources, or worse still, massively over-purchased upfront to achieve a volume discount. Casually spinning up servers produces over-provisioning, which then finds its way to the cloud when the on-premises footprint is inventoried and migrated, driving up costs. So we must ask:
1) Do we still need the system?
2) If we do, does it need to be built that big in the cloud?
3) Once we decide to move it, who keeps an eye on things after it is in the cloud?
Potentially, 25% of a customer’s server footprint is completely unused and could be retired immediately. For example, we once needed to spin up just one more server, so we looked at everything else running on the host and found two servers we didn’t even need; we powered them off and had the capacity we needed. In the cloud, that natural forcing function won’t exist. Unless customers periodically monitor usage, servers that are no longer needed could sit in the cloud driving costs up indefinitely. This is the need Movere fills once you are in the cloud.
IOPS and Throughput
Businesses face the continual struggle of ever-evolving updates and releases. With cloud providers constantly releasing new VM sizes, analyses must regularly be redone. This is especially true of how IOPS and throughput are factored into VM profiling.
Movere captures the amount of data in MB sent and received from the device between ARCBeats. There is a maximum aggregated bandwidth allocated and assigned to each VM type in Azure so an understanding of the levels of throughput is essential when selecting the right VM type to ensure adequate network capacity is available.
Being a SaaS solution, the profiling on Movere is always up-to-date. As soon as Azure offers new VM sizes, Intel/AMD releases new chips, or software vendors make new versions available, Movere will update its Azure VM size and storage requirements.
Movere uses two data points to differentiate between Standard and Premium storage needs:
Maximum input/output operations per second (IOPS): The maximum number of reads AND writes to non-contiguous storage locations between ARCBeats. The higher the number, the faster the device is able to read AND write data to and from disk.
Maximum Throughput: The maximum amount of data in MB sent AND received from a device between ARCBeats. The higher the number, the greater the device’s bandwidth needs. This impacts cloud sizing, as there is a maximum aggregate bandwidth allocated and assigned to each VM type. An understanding of maximum throughput helps ensure adequate network capacity is available.
The ARC collects disk performance data for both local and network attached mounted storage. Movere uses this data to calculate IOPS and throughput, then maps these results to Microsoft’s Premium Storage Disk Limits.
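As a sketch of how such a mapping might work in principle, the function below classifies a disk as Standard or Premium from its observed maxima. The default thresholds are illustrative stand-ins, not Microsoft’s actual published disk limits, which vary by disk SKU:

```python
def storage_tier(max_iops: float, max_throughput_mbps: float,
                 iops_limit: float = 500,
                 throughput_limit: float = 60) -> str:
    """Classify a disk as Standard or Premium storage.

    A disk whose observed peaks exceed what a Standard disk can deliver
    needs Premium storage. The default limits (500 IOPS, 60 MB/s) are
    assumptions for illustration; a real mapping would use Microsoft's
    published per-SKU Standard and Premium disk limits.
    """
    if max_iops > iops_limit or max_throughput_mbps > throughput_limit:
        return "Premium"
    return "Standard"
```

Note that either metric alone can force the Premium tier: a disk with modest IOPS but heavy sequential throughput still exceeds a Standard disk’s capabilities.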
Movere shows device and license counts geographically. It can also pinpoint allocation usage, show devices using non-standard or unsupported software, and more. Movere can then group systems by geography, assess device and license counts by country, and confirm the subnet range(s) in use at each location.
ASR – Azure Site Recovery
Movere can identify systems that are already Azure-ready as well as those that can leverage features such as Azure Site Recovery.
Each inventoried server is benchmarked using several inputs, including: CPU and core counts (accurate even for Windows Server 2000, 2003 and 2003 R2), performance data for each Intel/AMD chip on the market (updated monthly), capacity calculations specific to the platform (physical or virtual), RAM, disk and network interfaces, etc. Using these inputs, Movere benchmarks every VM size available in Azure (A0 through G5). Using the same measuring stick, Movere compares the specs of each on-premises server to the sizes available in Azure and recommends the closest match. This is done in real time without any input from the user.
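The benchmark itself isn’t documented here, but the “same measuring stick” idea can be sketched: score both on-premises servers and Azure sizes with one formula, then recommend the size whose score is nearest. The size specs, scoring weights, and chip factor below are all illustrative assumptions, not Movere’s actual model:

```python
# Hypothetical (cores, ram_gb) specs for a few Azure sizes; a real
# catalog would cover every size from A0 through G5.
AZURE_SIZES = {
    "A0": (1, 0.75),
    "A1": (1, 1.75),
    "A2": (2, 3.5),
    "D3": (4, 14.0),
    "G5": (32, 448.0),
}

def benchmark(cores: int, ram_gb: float, chip_factor: float = 1.0) -> float:
    """Single score for any machine: weighted compute plus memory.

    chip_factor would come from per-chip performance data (updated
    monthly in Movere's case); 1.0 is a neutral placeholder.
    """
    return cores * chip_factor + ram_gb / 4.0

def closest_azure_size(cores: int, ram_gb: float,
                       chip_factor: float = 1.0) -> str:
    """Recommend the Azure size whose score is nearest the server's."""
    score = benchmark(cores, ram_gb, chip_factor)
    return min(AZURE_SIZES,
               key=lambda s: abs(benchmark(*AZURE_SIZES[s]) - score))
```

Because servers and cloud sizes are scored by the same formula, the comparison stays consistent whether the source machine is physical or virtual.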
Application Dependency Mapping
One of the biggest impediments to moving Windows and Linux servers to the cloud is understanding application dependencies, especially those that are non-persistent. Each time Movere collects consumption data, it captures the connections (across all possible states) to each device. Each connection is tied to a process and installation path, so every connection can be mapped back to the application triggering it and the user behind it. Movere also captures CPU and memory consumption down to the process level, so it can see exactly what type of load each connection/application is driving. Without awareness of these dependencies, we would have no way of knowing what will start breaking as our cloud migration unfolds.
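A dependency map built from such connection records can be sketched in a few lines: group every observed inbound connection by its target device, so that before a device is migrated or retired you can list everything that will break without it. The record fields and values below are hypothetical examples, not Movere’s actual schema:

```python
from collections import defaultdict

# Hypothetical connection records: each ties a network connection back
# to the source device, the process behind it, and the user.
connections = [
    {"src": "web01", "dst": "sql01", "process": "w3wp.exe", "user": "svc_web"},
    {"src": "app02", "dst": "sql01", "process": "java",     "user": "svc_app"},
    {"src": "web01", "dst": "cache01", "process": "w3wp.exe", "user": "svc_web"},
]

def dependency_map(records):
    """Group inbound dependencies by target device.

    Returns {device: {(source_device, source_process), ...}} so that
    migrating a device surfaces every process that depends on it.
    """
    deps = defaultdict(set)
    for r in records:
        deps[r["dst"]].add((r["src"], r["process"]))
    return deps
```

For example, `dependency_map(connections)["sql01"]` would show that both the web tier and the app tier depend on the database server, so moving `sql01` alone risks breaking both.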