Storageless Data – agility at its finest

Detach your data from bare-metal machines with a global file system from Hammerspace.


We should write the word agility everywhere to remind ourselves how crucial it is for businesses these days. Building a big and powerful organization is tough; it takes time to find people you trust who have the skill set, knowledge, and experience you need. But even when you succeed, constant market changes force you to react fast. The road to success is built on a good team and fast, precise reactions.
IT can always help with speed if the approach is correct. Attached the right way, it can be a jet engine for your business, delivering the agility needed to answer the market's demands. So we should always ask: what is the one thing that slows us down the most? For data, the culprit is the storage machine itself; data is attached to a bare-metal storage device, trapped in a specific location, and moving it is slow and disruptive. If we could imagine data without storage, detached from bare-metal devices, we could imagine sky-high speed and agility for your business.


Of course, we are far from having data stored on anything other than bare-metal storage drives, but the industry has recently started to use a new term: storageless data. There is a lot of ambiguity around it; some consider it marketing hype, while others consider it a real thing. The contradiction originates from the inability of data to exist outside of storage. So the question is: what is storageless data, then?
Using the term storageless doesn't mean that your data doesn't live on storage. Ultimately, compute must run on servers, and data must live on a bare-metal storage device. Just as with serverless computing, the term's contradiction emphasizes that we should not need to care about the mapping to servers to get our jobs done. In the same way, storageless data is a deliberately contradictory term that says: even though I need to work with my data, I don't want to think about the underlying storage infrastructure. It is a consumer-centric approach because it puts the perspective of the user who works with the data in front of the perspective of the IT operator who works with the infrastructure. With Hammerspace, you get your job done without thinking about the mapping to storage or the underlying infrastructure. Essentially, this concept can be described as data as a service.

Of course, there is a lot more to Hammerspace than ultimate data agility. It overcomes the siloed nature of hybrid cloud storage and delivers global data access, enables the use of any storage in hybrid cloud infrastructures, and is built for high performance. Hammerspace is a global file system that grants you access to your data from any cloud and across any infrastructure. It serves, manages, and protects data on any infrastructure… anywhere. Ultimately, Hammerspace is modernizing data workflows so they can move from IT-centric to business-centric data.

The HAMMERSPACE advantage

1. Cost profiling

The Hammerspace platform constantly monitors the available storage infrastructure and data behavior to predict the cost of different tiering scenarios, whether on-premises or in the cloud. It provides you with accurate information about your infrastructure costs, allowing you to make informed business decisions.
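The idea behind cost profiling can be illustrated with a small sketch. The tier names, prices, and the simple cost model below are made-up assumptions for the example, not Hammerspace's actual model; the point is that, given per-tier prices and observed data behavior, the cheapest placement can be predicted per scenario:

```python
# Illustrative tiering-cost sketch. Tier names, prices, and the cost
# model are assumptions for this example, not Hammerspace's model.

TIERS = {
    # $/GB-month storage price, $/GB retrieval (access/egress) price
    "on-prem-flash": {"store": 0.10, "retrieve": 0.00},
    "cloud-object":  {"store": 0.02, "retrieve": 0.01},
    "cloud-archive": {"store": 0.004, "retrieve": 0.05},
}

def monthly_cost(size_gb: float, reads_gb_per_month: float, tier: str) -> float:
    """Predicted monthly cost of keeping a dataset on one tier."""
    t = TIERS[tier]
    return size_gb * t["store"] + reads_gb_per_month * t["retrieve"]

def cheapest_tier(size_gb: float, reads_gb_per_month: float) -> str:
    """Pick the tier that minimizes the predicted monthly cost."""
    return min(TIERS, key=lambda t: monthly_cost(size_gb, reads_gb_per_month, t))

# A frequently read dataset stays on flash; a cold one goes to archive.
print(cheapest_tier(1000, 20000))
print(cheapest_tier(1000, 0))
```

Feeding observed access patterns into a model like this, continuously and per file, is what turns raw monitoring data into tiering decisions.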


2. Business-objective-oriented automation

Now it is possible to teach your infrastructure about the nature of your business. With Hammerspace, you can define your business objectives, and the software will create extensible user-defined metadata that will help the machine learning mechanism to tier and automate the data across storage, sites, and clouds in the best way for your business.


3. Multi-site, multi-cloud

A universal global namespace, virtualized and replicated at file-level granularity, enables active-active data accessibility across all sites. It allows access to your data from any cloud and across any infrastructure.


4. Data management at a file-level

Hammerspace enables you to manage your data down to the level of an individual file. The real benefit of this technology is that file-granular data management is the only way to efficiently scale across complex mixed infrastructure without making unnecessary copies of entire data volumes.


5. No disruptions

There are two aspects to the non-disruptiveness of Hammerspace: zero-downtime assimilation, which quickly brings existing data online, and live data mobility, which eliminates migration disruptions. Combined, these two technologies keep your data highly accessible and secure at all times.


6. High performance

Hammerspace delivers high performance across hybrid clouds while simplifying performance and capacity planning, thanks to a parallel, scale-out file system with direct data access.


Aside from the advantages described above, Hammerspace offers a long list of useful features:

1. Native Kubernetes Support
2. Share-level snapshots
3. Undelete
4. Data replication
5. Real-time analytics
6. Support for NFS, SMB, and S3
7. Global dedupe & compression
8. WORM data-lock
9. Data-in-place assimilation
10. Data virtualization
11. Programmable REST API
12. Kubernetes CSI driver
13. Third-party KMS integration, and more

We recommend checking out this video of Hammerspace CEO David Flynn explaining the concept of storageless data. Braineering engineers are happy to work directly with Hammerspace to ensure that the infrastructure you receive is custom-tailored to your business. If you have any additional questions, want our engineers to assess the state of your infrastructure and its compatibility with Hammerspace technology, or just want to chat, feel free to call us!


How Desktop Virtualization Works II

End-User Computing – Simple and Secure


VDI Access

Users access VDI with different types of devices:

  • Thin or zero clients
  • Mobile devices (smartphones and tablets)
  • Standard PC platforms (Windows, macOS, Linux)

If clients are outside the corporate network, connecting over the WAN, secure access is provided by an additional component – the Unified Access Gateway (UAG).

User authentication is done through Active Directory integration, including additional security features such as Single Sign-On (SSO) and Two-Factor Authentication (2FA).


Figure 1. LAN access


Figure 2. WAN access


Figure 3. Various client devices


Thin/Zero clients


Thin and zero clients are designed for VDI, reliable and straightforward, with low power consumption. They also have a small footprint, which reduces space requirements. These clients are cheaper than standard desktops or laptops, with minimum maintenance required.

  • Zero Clients – contain no operating system, local disk, CPU, or memory resources. With only a PCoIP chip installed, they are extremely energy efficient and easy to administer. No data is ever stored on the device, which makes them suitable for high-security environments. Some of them are configured for specific protocols only, which can be a problem, especially in large environments. In addition, configuring and using USB devices can be complicated in some cases.
  • Thin Clients – contain an operating system, disk, CPU, and memory resources. This brings more capabilities but also more challenges in both hardware and software maintenance. These clients support VPN connections and a variety of USB devices.

Optimal device choice depends on many parameters, including the type of work, financials, and overall VDI environment. Some of the crucial factors are:

  • protocol (PCoIP, Blast, etc.)
  • Wi-Fi connectivity
  • VPN support
  • VoIP support
  • maximum resolution and number of monitors
  • graphical processing capabilities
  • security features
  • number and type of ports
  • centralized management capabilities
  • ease of configuration


Mobile devices and standard PC platforms


Users access VDI using the Horizon Client software, or through a browser if client installation is not possible (VMware Horizon HTML Access).

Standard PC platforms provide outstanding performance, but that comes with higher costs and more complicated maintenance. One way to lower costs is to repurpose older devices at the end of their lifecycle. Both standard platforms and mobile devices are an excellent choice for remote users accessing corporate VDI.


User profile management


All user environments, especially huge ones, fully benefit from VDI implementation when the whole process is automated as much as possible. This means resources are dynamically assigned as needed, at the right point in time, with minimal static, pre-allocated workload capacity. The user logs in and gets the first available virtual machine, which can be different each time. This raises the question of how user-specific data and application settings are managed.

There are several ways to manage user profiles, depending on specific VDI implementation, Horizon 7 edition, and licensing model:

  • VMware Dynamic Environment Manager (DEM)
  • VMware Persona Management
  • VMware App Volumes Writable Volumes
  • Microsoft FSLogix

Profile management is done through Active Directory integration, using group policies and dedicated administrative templates for Horizon 7. Newer versions of DEM can also work without AD.


VMware Dynamic Environment Manager (DEM)


Specific settings are kept at the application level rather than as a complete profile, which provides more granular control. Configurations are kept in separate .zip files for each application (Figure 4). This way, they can be applied across various operating systems, unlike most standard solutions tied to a specific OS. The Horizon 7 Enterprise edition is required.



Figure 4. Configuration files (DEM)


VMware Persona Management


This solution keeps the entire user profile, similar to standard Microsoft Roaming Profile solutions. It is available in all Horizon 7 editions, but it doesn’t support RDSH agents and newer versions of Windows 10.


VMware App Volumes – Writable Volumes


Profiles are kept on separate virtual disks and attached to various virtual machines as needed. The Horizon 7 Enterprise edition is required, as well as separate infrastructure for App Volumes (servers, agents, etc.). Virtual disks are in the standard .vmdk format, which eases their administration and data backup/recovery. App Volumes can be combined with DEM to get a wide range of profile management options.


Microsoft FSLogix


This solution is handy for users without the Horizon 7 Enterprise edition who can't use the advanced VMware profile management features. Profiles are kept on a network share in VHD(X) format and attached to VMs as virtual disks. This way, profile content is not copied at logon, which often caused significant start-up delays. In addition, there are several more optimization features:

  • Filter Driver is used for redirection, so applications see the profile as if it were on the local disk; this is important because many applications don't work well with profiles located on network drives
  • Cloud Cache technology enables part of user data to be stored on local disk and multiple network paths for profiles to be defined; this increases redundancy and availability in case of an outage
  • Application Masking can efficiently control resources based on a number of parameters (e.g., username, address range).
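The profile-container behavior described above is switched on through a handful of registry values under `HKLM\SOFTWARE\FSLogix\Profiles`. A minimal configuration might look like the fragment below; the share name `\\fileserver\profiles` is a placeholder for your own SMB share:

```shell
:: Minimal FSLogix profile-container configuration (run on the VDI image).
:: \\fileserver\profiles is a placeholder for your own SMB share.
reg add HKLM\SOFTWARE\FSLogix\Profiles /v Enabled /t REG_DWORD /d 1 /f
reg add HKLM\SOFTWARE\FSLogix\Profiles /v VHDLocations /t REG_MULTI_SZ /d \\fileserver\profiles /f
```

`Enabled` turns the profile container on, and `VHDLocations` lists one or more network paths where the per-user VHD(X) files are created (multiple paths are how Cloud Cache redundancy is configured).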

Both 32-bit and 64-bit architectures are supported, on all operating systems from Windows 7 and Windows Server 2008 R2 onward. It is available to all users with any of the following licenses:

  • Microsoft 365 E3/E5
  • Microsoft 365 A3/A5/Student Use Benefits
  • Microsoft 365 F1
  • Microsoft 365 Business
  • Windows 10 Enterprise E3/E5
  • Windows 10 Education A3/A5
  • Windows 10 VDA per user
  • Remote Desktop Services (RDS) Client Access License (CAL)
  • Remote Desktop Services (RDS) Subscriber Access License (SAL)


Advanced VDI solutions – Teradici PCoIP Remote Workstation


Global data growth requires more and more resources for fast and reliable data processing. Some specific business areas also require very intensive calculations and simulations, as well as complex graphical processing. Standard VDI solutions can’t cope with these demands, and usually, that kind of processing is not moved outside the data centers. On the other hand, many companies need their employees to access corporate resources from any place, at any time.

It can be handled by keeping all processing inside the data center and transferring only display information (in the form of pixels) to remote clients, using the Teradici PCoIP Remote Workstation solution (Figure 5). It is composed of two main components:

  • remote workstation host
  • remote workstation client



Figure 5. Teradici PCoIP Remote Workstation solution


The host can be any standard Windows or Linux platform that does the data processing. The host's display information is then processed at the pixel level by specific PCoIP techniques, encrypted, and sent over the network to the client. The host must have the following components installed:

  • Graphical card (GPU)
  • PCoIP Remote Workstation Card – receives data from GPU and does pixel-level processing, compression, and encoding. This component has three main types, depending on specific requirements and host configuration (Figure 6).



Figure 6. PCoIP Remote Workstation Card


Due to various display information types (text, images, video, etc.), special algorithms are used to recognize each type and apply appropriate compression methods. Moreover, the compression ratio can be adjusted to network fluctuations.
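The bandwidth-adaptive behavior can be sketched as a simple feedback rule: measure the currently available bandwidth and pick a compression level accordingly. The thresholds and quality levels below are invented for illustration and have nothing to do with Teradici's proprietary algorithms:

```python
# Illustrative sketch of bandwidth-adaptive display compression.
# Thresholds and quality names are invented for this example,
# not Teradici's actual algorithm.

def pick_quality(bandwidth_mbps: float) -> str:
    """Map the currently measured bandwidth to a compression quality level."""
    if bandwidth_mbps >= 50:
        return "lossless"   # plenty of headroom: build to perceptually lossless
    elif bandwidth_mbps >= 10:
        return "high"       # mild lossy compression
    elif bandwidth_mbps >= 2:
        return "medium"
    else:
        return "low"        # congested link: compress aggressively

# As measured bandwidth fluctuates, the chosen quality follows it.
for bw in (100, 25, 5, 1):
    print(bw, "Mbps ->", pick_quality(bw))
```

In the real product this loop runs continuously and per content type (text, image, video), so the compression ratio tracks network fluctuations instead of being fixed.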

The image from the host is decompressed and displayed on the client side. Clients can be standard PC platforms (desktop/laptop) or dedicated devices (thin/zero clients), supporting up to 4 displays, depending on the resolution.

Regardless of client type, security is at a very high level because data never leaves the data center – only encrypted pixels are transmitted. The use of dedicated devices, such as zero clients, additionally decreases the risk of potential attacks and data loss.




As mentioned, every infrastructure is unique, and each implementation depends on many factors. However, some typical scenarios can be used for approximate resource planning and calculation.


Scenario 1. Small and medium environments


The basic option assumes infrastructure for 50 users, scalable up to 200 virtual machines by adding hardware resources and appropriate licenses.

The licensing model is based on the Horizon 7 Advanced Add-on (Named/CCU), with separate licensing for vSAN, vSphere, and vCenter.

Virtual desktops are created as linked clones, which significantly reduces disk space consumption and eases administration. User data is kept on a network share, with 100 GB allocated per user.

Compute resources consist of 4 hosts in a vSAN cluster with a RAID-5 configuration. The ESXi operating system is installed on separate M.2 disks with RAID-1 protection. Table 1 shows approximate calculation details for the vSAN cluster, and Table 2 shows the host specifications. Licenses are defined in Table 3.
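As a rough illustration of how numbers like those in Table 1 might be derived (the overhead figures below are generic vSAN rules of thumb, not the actual table values): RAID-5 erasure coding stores data with a 4/3 (~1.33x) capacity overhead, and a portion of raw capacity is usually kept free as slack space for rebuilds and snapshots.

```python
# Back-of-the-envelope vSAN capacity sketch for Scenario 1.
# The 4/3 RAID-5 overhead and 30% slack reservation are generic
# rules of thumb, not the exact figures from Table 1.

def raw_capacity_needed_gb(users: int, gb_per_user: float,
                           raid_overhead: float = 4 / 3,
                           slack: float = 0.30) -> float:
    usable = users * gb_per_user   # usable capacity the users actually need
    raw = usable * raid_overhead   # RAID-5 (3+1) erasure coding overhead
    return raw / (1 - slack)       # keep ~30% of raw capacity free as slack

# 50 users x 100 GB each:
print(round(raw_capacity_needed_gb(50, 100)), "GB raw across the cluster")
```

Real sizing also accounts for VM swap, snapshots, dedupe/compression savings, and host-failure rebuild capacity, so the vendor calculators should be used for anything beyond a first estimate.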



Table 1. vSAN cluster calculation (50 VMs)



Table 2. Host specifications (50 VMs)



Table 3. Licenses (50 VMs)


Scenario 2. Large environments


Besides additional hardware resources, large infrastructures usually need extra features for management, control, and integration. In addition, a certain level of automation is desirable.

This scenario is based on the following presumptions:

  • The number of users is 200, with a possible scale-up to 500
  • Up to 100 GB of data per user
  • Ability to use RDS Published applications
  • Ability to virtualize applications with App Volumes
  • Ability to manage user profiles

The features mentioned above require the Horizon 7 Enterprise edition, which includes vSAN, vSphere, and vCenter licenses. It also enables instant clones for VM deployment, which significantly increases system agility and VM creation speed (compared to linked clones). The licensing model can be either Named or CCU.

User profile management can be done using Writable Volumes – virtual disks assigned to every user, containing all installed applications, data, and specific settings. These disks are attached to VM during logon, so the user profile is always available, regardless of VM assigned. Combined with VMware Dynamic Environment Manager, it can offer a high level of granularity in data and profile management.

The servers used are the same as for Scenario 1, with additional hardware resources installed. All details are listed in Tables 4, 5, and 6.



Table 4. vSAN cluster calculation (200 VMs)



Table 5. Host specifications (200 VMs)



Table 6. Licenses (200 VMs)




Software-Defined Disaster Avoidance – The Proper Way

VMware vSAN metro cluster implementation


The topic of my first blog post, Software-Defined Disaster Avoidance – The Proper Way, is a story that we at Braineering have successfully turned into reality twice so far. The two stories have different participants (clients), but both faced the same fundamental challenges. They occurred in two distinct periods: the first in 2019 and the second in 2020.




Both clients are from Novi Sad and belong to the public sector. Both provide IT products to many public services, administrations, and bodies without which life in Novi Sad would not run smoothly. More than 3,000 users rely on IT products and services hosted in their Datacenters daily. Business applications such as Microsoft Exchange, SharePoint, Lync, MS SQL, and Oracle run on just some of the 400+ virtual servers that their IT staff takes care of, maintains, or develops daily.


Key Challenges


At the time, both clients' IT infrastructure was more or less the standard IT infrastructure we see at most clients: a primary Datacenter and a Disaster Recovery Datacenter located at another physically remote location.

Both the primary and the DR site are characterized by traditional 3-Tier architecture (compute, storage, and network), as shown in Figure 1.

The hardware located at the DR site usually operates with more modest resources and older-generation equipment than the primary site, and only a smaller number of the most critical virtual servers are replicated to it. Both clients had storage-based replication between the Datacenters, and VMware SRM was used for automatic recovery.


Figure 1.


Even though the clients are different, they had common vital challenges:

  • Legacy hardware 
    • different server generations: G7, G8, G9
    • storage systems at end of service life


  • Inability to keep up with the latest versions because of legacy hardware.
    • vSphere
    • VMware SRM
    • Storage OS or microcode


  • Weak and, by today's standards, modest performance
    • 8 Gb SAN, 1Gb LAN
    • Slow storage system disks (SATA and SAS)
    • Storage system fragmentation
    • vCPU:pCPU ratio


  • Expensive maintenance – again due to legacy hardware
    • Refurbished disks
    • EOSL (End of service life), EOGS (End of general support)


  • Limited scalability, the expansion of CPU, memory, or storage resources


When new projects and dynamic daily user requests to upgrade existing applications are also considered, both clients were aware that something urgently needed to be done about this issue.
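The vCPU:pCPU ratio listed among the performance problems is simple to compute for any estate; the host and VM counts below are made-up example numbers, not either client's real inventory:

```python
# Quick vCPU:pCPU overcommitment check. Host counts, core counts, and
# VM sizes are made-up example numbers, not either client's inventory.

def vcpu_pcpu_ratio(total_vcpus: int, hosts: int, cores_per_host: int) -> float:
    """Assigned virtual CPUs per physical core across the cluster."""
    return total_vcpus / (hosts * cores_per_host)

# e.g. 400 VMs x 4 vCPUs each, on 10 hosts with 16 physical cores each:
ratio = vcpu_pcpu_ratio(400 * 4, 10, 16)
print(f"{ratio:.1f}:1 vCPU:pCPU")
```

The acceptable ratio depends heavily on the workload mix, but a number like the one above on legacy hardware is a clear sign of CPU contention.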


Requirements for the Future Solution


The future solution was required to be performant, with low latency, easy scaling, and high availability. The goal was to reduce any unavailability to a minimum, with low RTO and RPO. The future solution also had to be simple to maintain, and the migration to it as painless as possible. If possible, it should also remove or reduce overprovisioning, long-term planning, and the doubts that arise whenever resources need to be expanded.

And, of course, the future solution must support all those business and in-house applications hosted on the previous IT infrastructure.


The Chosen Solution


After considering different options and solutions, both users eventually opted for VMware vSAN. In both cases, we at Braineering IT Solutions suggested vSAN in a Stretched Cluster configuration to maximize the potential and benefits that such a configuration brings. To our delight, both users accepted our proposal.


Figure 2.


Stretched Cluster


What is a vSAN Stretched Cluster? It is an HCI cluster stretched between two distant locations (Figure 2).

The chosen solution fully meets all the requirements mentioned above: it supports all of the clients' business and in-house applications. In an All-Flash configuration, vSAN can deliver a vast number of low-latency IOPS. The Scale-Up and Scale-Out architecture allows you to quickly expand resources by adding resources to existing nodes (Scale-Up) or new nodes to the cluster (Scale-Out).

It is easy to manage; everything is operated from a single vCenter. Existing backup and DR software solutions are supported and work seamlessly. And finally, as the most significant benefit of vSAN in the Stretched Cluster configuration, we have disaster avoidance and planned maintenance.


The benefits of the vSAN Stretched Cluster configuration are:

  • Site-level high availability to maintain business continuity.
  • Disaster avoidance and planned maintenance
  • Virtual server mobility and load-balancing between sites
  • Active-Active Datacenter
  • Easy to manage – a single vSphere vCenter.
  • Automatic recovery in the case of one of the sites’ unavailability
  • Simple and faster implementation compared to the Stretched cluster of traditional storage systems.


The Advantages of the Implemented Solution


The most important advantages:

New servers: The ability to keep up with new versions of VMware platform solutions, a better degree of consolidation, and faster execution of virtual machines.

10 Gbps network: The 10 Gbps datacenter network infrastructure raises network communications to a new level of speed.

HCI: A scale-out platform where the infrastructure grows by adding nodes; compute, network, and storage resources become building blocks. Existing storage systems were replaced with the vSAN platform in an All-Flash configuration.

SDDC: A platform that opens the door to new solutions such as network virtualization, automation systems, and day-two operations.

DR site: A new DR site relocated to a third remote location, retaining the existing VMware SRM and vSphere Replication technology.

Savings: Consolidation of all VMware licenses and of hardware maintenance onto the new equipment; savings were achieved by retiring the maintenance of old hardware systems.

Stretched cluster: A disaster-avoidance system that protects services and data and recovers with automated procedures, even in a complete site failure scenario.


The End Solution


Today’s IT infrastructure for both clients is shown in Figure 3.


Figure 3.


The Preferred site and the Secondary site, an Active-Active cluster, use one common stretched vSAN datastore. All I/O operations on this stretched datastore are synchronous. VMware vSphere Replication replicates the 25 most critical virtual servers to the DR site, and that replication is asynchronous. For automated and orchestrated recovery at the DR site in the event of a disaster on the stretched cluster, both users retained the solution they had previously implemented, VMware SRM.







