Secure Your Business with Next-Generation Firewall

Palo Alto – Next-Generation Firewall first steps configuration guide

You installed a Palo Alto firewall in your infrastructure, started the device, and asked yourself: Okay, now what? No problem. The Palo Alto user interface is among the easiest to learn and navigate. Below, I will do my best to make it easier for you to continue the implementation so that your firewall is optimally tuned for your infrastructure. To begin, here are the key steps for any Palo Alto firewall:
  1. Start your firewall and follow the start, reset, shutdown sequences
  2. Register your Palo Alto firewall
  3. License your Palo Alto firewall.
  4. Initial setting
  5. Update
  6. Administration
1. Starting

Whether it is a physical or virtual firewall, it is good practice to test the start, restart, and shutdown sequences before production. This saves time and eliminates potential setbacks if the firewall, for some reason, gets stuck in a reboot loop, throws database errors, etc. Engineers often assume with great confidence that a new device will boot without problems, so they put it into production immediately; then, after the first update and the restart it requires, the question comes: has the device's reload been tested? The start, restart, and shutdown sequences are best followed through the CLI, where potential errors will be shown.

This is simply good practice and is especially desirable for virtual appliances, due to the software version's compatibility with the server infrastructure on which it boots, correctly paired interfaces, etc.

1.2: Login

You can do the initial login from GUI and CLI mode. After the 7-8 minutes needed for a complete boot, the login prompt will be displayed. The default username and password are admin/admin. If Palo Alto does not let you in, it is likely because the firewall database has not fully booted, so wait a few more minutes. After pressing Enter, another prompt will appear asking you to enter a new password. By default, password complexity is enabled, so your password must be a minimum of 8 characters, with a combination of letters, numbers, and a special character. You can turn off password complexity later.
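As a rough sketch of the default complexity rule (an illustration, not Palo Alto's actual validation logic; the function name is mine), a check like this captures the requirements above:

```python
import re

def meets_default_complexity(password: str) -> bool:
    """Approximate the default PAN-OS complexity rule described above:
    at least 8 characters, containing letters, numbers, and at least
    one special character."""
    return (
        len(password) >= 8
        and re.search(r"[A-Za-z]", password) is not None      # letters
        and re.search(r"[0-9]", password) is not None         # numbers
        and re.search(r"[^A-Za-z0-9]", password) is not None  # special char
    )

print(meets_default_complexity("admin"))         # False: too short, no digit/special
print(meets_default_complexity("Fw-2024!pass"))  # True: satisfies all checks
```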

1.3: Palo Alto Web User Interface


The Palo Alto web interface is divided into 7 tabs.

1.3.1. Dashboard

1.3.3. Monitor – Your Best Troubleshoot Friend

The Monitor tab displays all traffic logs that pass through the firewall. For security policies, logging must be enabled on each policy. The fundamental categories are Traffic logs, Threat logs, URL Filtering logs, etc. For each of them, you can view the time, source, destination, user, application, and port/service that each session left behind.

1.3.4. Policies

The Policies tab allows you to create policies to manage your firewall traffic. Each policy must contain a name, source zone, destination zone, and the action to be performed on traffic that matches that policy.
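For illustration, the four mandatory fields map naturally onto a PAN-OS-style set command. The helper below is a hypothetical sketch that only builds the command string; verify the exact syntax against your PAN-OS version before using it:

```python
def security_rule_cli(name, src_zone, dst_zone, action,
                      source="any", destination="any", application="any"):
    """Render the four mandatory policy fields as a PAN-OS-style
    'set' command (illustrative helper, not an official API)."""
    return (
        f"set rulebase security rules {name} "
        f"from {src_zone} to {dst_zone} "
        f"source {source} destination {destination} "
        f"application {application} action {action}"
    )

print(security_rule_cli("Allow-Web", "trust", "untrust", "allow"))
```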

If a policy regulates traffic to and from the Internet, you need NAT policies, which go hand in hand with security policies. There are several types of NAT policies, each used in specific cases: Many-to-One (Hide) NAT, Source NAT, One-to-One (Static) NAT, and Bi-Directional and Uni-Directional NAT.

To properly understand traffic flow, it is necessary to understand the logic by which Palo Alto manages sessions. The scheme below represents a simplified (really simplified) version of the traffic flow. You can find a more detailed description in the link. Palo Alto bases its rules on zones and applies them to zones, not to interfaces.

1.3.5. Objects

The Objects tab is used to give the most frequently used IP addresses, services, and applications more recognizable names so they are easier to use later in policies. Objects help you later when an object's attribute changes, for example, a public IP address: by changing the IP address within the object, you change it in every place where the object is referenced. This saves time and avoids the potential problems of changing that IP address everywhere by hand. Object names are also easier to remember than IP addresses.
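A minimal sketch of why objects pay off (the dictionaries and names here are hypothetical, not Palo Alto's internal model): the policy stores only the object name, so one change to the object is picked up everywhere the name is referenced:

```python
# Hypothetical structures for illustration only.
objects = {"web-server": {"ip": "203.0.113.10"}}               # address object
policy = {"name": "Inbound-Web", "destination": "web-server"}  # references by name

def resolve(policy, objects):
    """Resolve the address object a policy references by name."""
    return objects[policy["destination"]]["ip"]

print(resolve(policy, objects))              # 203.0.113.10
objects["web-server"]["ip"] = "203.0.113.99"  # change the IP once...
print(resolve(policy, objects))              # ...every referencing policy follows
```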


1.3.6. Network

The Network tab is where you configure the interfaces you associate with zones. These can be Ethernet, VLAN, loopback, and tunnel interfaces. Here you will also set up VLANs, routing, DHCP pools, VPN tunnels, QoS profiles, etc.

1.3.7. Device

The Device tab is where you set up your firewall under the hood. Here you will find settings for the management interface, DNS, updates, high availability (HA), logins, administrators, certificates, authentication servers, and profiles.

We went through the whole firewall, in a nutshell, so that later you know where to configure each Palo Alto feature, and there are plenty of them: security policies, NAT policies, IPsec tunnels, DHCP, Monitor tab management, high availability, routing, and many others.

But first of all, what should you do after building a new system? Before you put the firewall into production, bring the software up to date first: the latest definition updates for Antivirus, Anti-Spyware, Vulnerability Protection, URL Filtering, WildFire Analysis, etc. Note that all these add-ons are tied to licenses, so you must license the device before it can update.

2. Registration

Before you start using Palo Alto, you need to register it on the portal, activate the licenses on the portal, and then activate the same licenses within the Palo Alto firewall. Register Palo Alto on the Palo Alto Customer Support Portal. If it is a physical firewall, use the serial number shown on the Dashboard. For VM versions, you need the auth code from the purchase order. Follow the wizard, fill in the required information, and you have registered the firewall.

When you buy the VM version of Palo Alto, you also receive a set of authorization codes via email. Enter these authorization codes on the portal within the newly registered firewall. Add a new authorization code (red arrow) or download the licenses to upload them manually to Palo Alto (blue arrow).

3. Licensing

Device > Licenses

To activate licenses within the firewall automatically, you need to configure the IP address, mask, default gateway, and DNS server. You can license manually by uploading the licenses in the format number-threat.key, where the word threat represents the firewall feature you bought; it can also be DNS, Support, Threats, URL, WildFire, etc. You can also retrieve licenses automatically by clicking Retrieve license keys from license server, after which Palo Alto will retrieve the licenses from the server (Internet connection required).
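For the automatic path, the same operation is exposed through the PAN-OS XML API as the operational command request license fetch. The snippet below only builds the request URL (the host and API key are placeholders, and nothing is sent); check the XML API reference for your PAN-OS version before relying on the exact command layout:

```python
from urllib.parse import urlencode

def license_fetch_url(host, api_key):
    """Build the XML API call roughly equivalent to clicking
    'Retrieve license keys from license server' (op command
    'request license fetch'). Illustrative only; nothing is sent."""
    cmd = "<request><license><fetch/></license></request>"
    query = urlencode({"type": "op", "cmd": cmd, "key": api_key})
    return f"https://{host}/api/?{query}"

print(license_fetch_url("192.0.2.1", "API-KEY"))
```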

4. Initial setting
4.1. Management

Device > Setup > Management

  • Configure the hostname and domain name
  • Check Accept DHCP server provided Hostname or Accept DHCP server provided Domain if the management interface is set via DHCP.
  • Login Banner is an optional text that will be displayed to each user as a warning or notification before logging in
  • Latitude and Longitude are used to position the firewall on the map within the ACC and Monitor tabs.

4.2. Configuration

Device > Setup > Operations

Used to manage the configuration, reset, and shut down the firewall.

When multiple administrators manage your Palo Alto, the commit/config lock is a very welcome feature.

The case of multiple admins

4.3. Services

Device > Setup > Services

Used to configure the DNS and NTP servers of your firewall. DNS is required for Palo Alto to retrieve updates from the update server. NTP is optional but recommended.

4.3.1 Service Route Configuration

Device > Setup > Services

Here you set how Palo Alto communicates with the update server. By default, this is the management interface, but if you do not want external networks to reach your management network, you can configure another interface through which the firewall will receive updates.

4.4. Management interfaces

Device > Setup > Interfaces

Here you can set the management interface and the address by which you will access the firewall. By default, HTTPS, SSH, and Ping are enabled (HTTPS for web interface access, SSH for CLI access, and Ping for interface response, HA functionality, etc.). Permitted IP Addresses defines the ranges of addresses allowed to access the firewall.
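The Permitted IP Addresses check can be sketched with Python's standard ipaddress module (the networks below are examples, not defaults):

```python
import ipaddress

def is_permitted(client_ip, permitted):
    """Return True if the administrator's address falls inside any of
    the Permitted IP Addresses ranges (illustrative sketch)."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in ipaddress.ip_network(net) for net in permitted)

# Example ranges: an admin subnet and one single jump host.
permitted = ["10.10.0.0/24", "192.0.2.50/32"]
print(is_permitted("10.10.0.17", permitted))    # True:  inside 10.10.0.0/24
print(is_permitted("198.51.100.9", permitted))  # False: matches no range
```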

5. Updates
5.1 Update security profiles

  Device > Dynamic Updates

Palo Alto publishes new definitions for Antivirus, Applications and Threats, WildFire, URL Filtering, etc., daily. To receive updates for a particular security profile, you must have an active license for that profile. To fully protect your infrastructure, you must purchase and activate the Antivirus, WildFire, and Threat Prevention licenses separately. You can also configure how often Palo Alto checks for new definitions.

5.2 Palo Alto PAN-OS update

Device > Software

Palo Alto requires its software to be updated to the latest version to maintain the highest level of protection. However, good practice says to wait before jumping to a new major release, because those versions usually carry initial bugs that are later corrected in x.0.1, x.0.2, etc. Updating PAN-OS has its own procedure. In the image below, you can see the latest versions available for update.

5.3. PAN-OS Update Guide
  1. Download the latest updates for licensed profiles and install them (Antivirus, Applications and Threats, WildFire, URL Filtering, etc.).
  2. Go to Software and click Check Now to list the latest PAN-OS versions.
  3. Suppose your current version is 9.0.4; the next version you need to install is 9.0.12, then 9.1.0, then 9.1.6, then 10.0.0, and finally 10.0.3.

Update path: 9.0.4 > 9.0.12 > 9.1.0 > 9.1.6 > 10.0.0 > 10.0.3

  4. After clicking Download, Palo Alto will download the installation image and prepare it for installation.
  5. Click Install; it takes ~15 minutes for Palo Alto to install and reboot.

Note # 1. Between updates, download and install all the latest available security profile definitions. Some PAN-OS versions will not install without these latest definitions.

Note # 2. Before each update, commit the latest changes, then save and export the configuration.

Note # 3. After each update jump, check the firewall's basic functionality (traffic, logs, ping, services, etc.).
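The version-hop rule from step 3 can be sketched as a small function (my reading of the rule, not an official upgrade tool; the maintenance-release map is supplied by hand):

```python
def upgrade_path(current, target, latest_maintenance, feature_order):
    """Finish each feature release on its latest maintenance version,
    then install the base image of the next feature release, and so on
    until the target version is reached."""
    cur_train = ".".join(current.split(".")[:2])
    tgt_train = ".".join(target.split(".")[:2])
    path = []
    start = feature_order.index(cur_train)
    end = feature_order.index(tgt_train)
    for k in range(start, end + 1):
        train = feature_order[k]
        if k < end:
            latest = latest_maintenance[train]
            if latest != current:                    # already on it? skip
                path.append(latest)
            path.append(feature_order[k + 1] + ".0")  # next base image
        elif not path or path[-1] != target:
            path.append(target)
    return path

print(upgrade_path(
    "9.0.4", "10.0.3",
    {"9.0": "9.0.12", "9.1": "9.1.6"},
    ["9.0", "9.1", "10.0"],
))  # ['9.0.12', '9.1.0', '9.1.6', '10.0.0', '10.0.3'] -- the path above
```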

6. Administration
6.1. Administrators

The admin user is the default administrator of the Palo Alto firewall. In addition, you can add more administrators and give each of them specific privileges.

Note # 1: This is not the same user database as the one you can find at the bottom of the Device tab.

7. High Availability

This solution provides uninterrupted service and security in the event of a firewall failure. It can work in Active/Active and Active/Standby mode. In Active/Active mode, both firewalls work together: traffic flows through both, and if one fails, the other takes over the load of the failed one. In Active/Standby mode, only one firewall regulates traffic. The firewall in standby mode monitors sessions and waits for the active firewall to stop responding to ping packets; if the active firewall fails, the standby firewall takes over all traffic.

7.1 Synchronization

Networks, objects, policies, certificates, and session tables are synchronized. Synchronization is performed only after a commit is executed.

Not synchronized: management interface settings, high availability settings, logs (for the PA-200 series), and information from the ACC tab.

7.2 Prerequisites

Before you can enable high availability on Palo Alto, both devices must:

  • Be the same model
  • Run the same PAN-OS version
  • Have up-to-date Applications, URL, and Threat databases
  • Have HA interfaces of the same type
  • Have the same licenses


For virtual firewalls, the same hypervisor and the same number of processor cores are required.

7.3 Connection

Depending on the firewall model, the connection method differs. These can be dedicated HA ports on the firewall, or any other L3 ports if the firewall did not come with built-in HA ports.

HA1 link: Used for management traffic, and it is an L3 link: hellos, heartbeats, HA state, routing, User-ID, configuration changes, etc. Both firewalls exchange hello and heartbeat packets at a configured interval. A heartbeat is an ICMP ping to the other firewall, and a response to that ping means the active firewall is operational.

HA2 link: Carries data traffic, and it is an L2 link (it can also be L3 if the data link is not in the same network). Over the HA2 link, sessions, forwarding tables, IPsec security associations, ARP tables, etc., are exchanged. The traffic is one-way, flowing from the active to the passive firewall.

Backup links for HA1 and HA2 provide redundancy and can prevent split-brain scenarios, where both devices think they are the master. When you configure backup links, the IP addresses of the primary and backup connections must not overlap: backup links must be on another network and on other interfaces.


7.4 Failure Detection

The firewall uses several monitors to detect unavailability. Hello and heartbeat messages check whether the active firewall responds: hello packets are sent to the active device to determine its status, and heartbeats (ICMP pings) go over the control link to determine availability and connectivity. You can also configure the firewall to monitor the status of physical interfaces and choose the failover trigger: whether it fires when any interface goes down or only when all do. By default, failover occurs when any monitored link goes down, the ping interval exceeds 200 milliseconds, or three consecutive pings fail.
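The default triggers described above can be sketched as a simple decision function (an illustration of the logic, not PAN-OS code; parameter names are mine):

```python
def should_failover(link_down, missed_heartbeats,
                    any_link_triggers=True, heartbeat_threshold=3):
    """Sketch of the default failover triggers: any monitored link
    going down, or three consecutive missed heartbeats."""
    link_failed = any(link_down) if any_link_triggers else all(link_down)
    return link_failed or missed_heartbeats >= heartbeat_threshold

print(should_failover(link_down=[False, False], missed_heartbeats=3))  # True
print(should_failover(link_down=[True, False], missed_heartbeats=0))   # True
print(should_failover(link_down=[False, False], missed_heartbeats=1))  # False
```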

7.5 High availability setting

Device > High Availability

Example of an Active/Standby configuration on the active firewall


Storageless Data – agility at its finest

Detach your data from bare-metal machines with a global file system from Hammerspace.


We should write the word agility everywhere to remind ourselves how crucial it is for businesses these days. It is tough to build a big and powerful organization; it takes time to find people you trust with the skill set, knowledge, and experience you need. But even if you do, constant market changes force you to react fast to succeed! The road to success is built on a good team and a fast, precise reaction.
IT can always help with the speed of things if the approach is correct. It can be a jet engine speeding up your business if you choose the right way to attach it. With the right approach, IT can deliver the agility your business needs to answer the market's demands. We keep asking ourselves what the one thing that slows us down the most is. For data, the machine is the culprit: data is attached to the bare-metal storage machine, trapped in a specific location. If we could imagine data without storage, detached from bare-metal devices, we could imagine sky-high speed and agility for your business.


Of course, we are far from having data stored on anything other than bare-metal storage drives, but a new term, storageless data, is something the industry has recently started to use. There is a lot of ambiguity in the term; some consider it marketing hype, while others consider it a real thing. The contradiction in the term originates from the inability of data to exist outside of storage. So the question is: what is storageless data, then?
Using the term storageless doesn't mean that your data doesn't live on storage. Ultimately, compute must run on servers, and data must live on a bare-metal storage device. Just as with serverless computing, the contradiction in the term emphasizes that we do not need to care about the mapping to servers to get our jobs done. In the same way, storageless data is a deliberately contradictory term that says: even if I need to work with my data, I don't want to think about the underlying storage infrastructure. It is a consumer-centric approach because it puts forward the perspective of the user who works with the data rather than the IT operator who works with the infrastructure. With Hammerspace, you get your job done without thinking about mapping to storage and the underlying infrastructure. Essentially, this concept can be described as data as a service.

Of course, there is a lot more to Hammerspace than ultimate data agility. It overcomes the siloed nature of hybrid cloud storage and delivers global data access, enables the use of any storage in hybrid cloud infrastructures, and is built for high performance. Hammerspace is a global file system that grants you access to your data from any cloud and across any infrastructure. It serves, manages, and protects data on any infrastructure… anywhere. Ultimately, Hammerspace is modernizing data workflows so they can move from IT-centric to business-centric data.

The HAMMERSPACE advantage

1. Cost profiling

The Hammerspace platform constantly monitors the available storage infrastructure and data behavior to predict the cost of different tiering scenarios, whether on-premises or in the cloud. It gives you valuable and accurate insight into your infrastructure costs, allowing you to make informed business decisions.


2. Business objectives oriented automation

Now it is possible to teach your infrastructure about the nature of your business. With Hammerspace, you can define your business objectives, and the software will create extensible user-defined metadata that will help the machine learning mechanism to tier and automate the data across storage, sites, and clouds in the best way for your business.


3. Multi-site, multi-cloud

Universal global namespace, virtualized and replicated at a file-level granularity, enables active-active data accessibility across all sites. It allows access to your data from any cloud and across any infrastructure.


4. Data management at a file-level

Hammerspace enables you to manage your data to the level of a particular file. But the real benefit of this technology is that file granular data management is the only way to efficiently scale across complex mixed infrastructure without making unnecessary copies of entire data-volumes.


5. No disruptions

There are two aspects of the non-disruptiveness of Hammerspace. Zero-downtime assimilation that quickly brings data online and live data mobility technology that eliminates migration disruptions. Those two technologies combined make your data highly accessible and secure at any time.


6. High performance

Hammerspace delivers high performance across hybrid clouds by simplifying performance and capacity planning through a parallel, scale-out file system with direct data access.


Aside from the features and advantages mentioned above, Hammerspace offers a long list of useful capabilities:

1. Native Kubernetes Support
2. Share-level snapshots
3. Undelete
4. Data replication
5. Real-time analytics
6. Support for NFS, SMB, and S3
7. Global dedupe & compression
8. WORM data-lock
9. Data-in-place assimilation
10. Data virtualization
11. Programmable REST API
12. Kubernetes CSI driver
13. Third-party KMS integration, etc.

We recommend you check out this excellent video of Hammerspace CEO David Flynn explaining the concept of storageless data. Braineering engineers are happy to work directly with Hammerspace to ensure that the infrastructure you receive is custom-tailored for your business. If you have any additional questions, want our engineers to assess the state of your infrastructure and its compatibility with Hammerspace technology, or just want to chat, feel free to call us!


Compose Bare-Metal Machines in Seconds!

Liqid's concept of disaggregated composable infrastructure


IT infrastructure plays a significant role in how a company or an organization survives on the market. IT infrastructure enables the company to create value within the time the market demands. To achieve this, the company or its IT department must overcome two main challenges:

  1. The span of services IT needs to perform; i.e., the capacity that produces complexity,
  2. The time it needs to change to answer the challenges the market produces, i.e., the flexibility.

Today, the company strives to reduce both the complexity and the time needed to establish an IT infrastructure that meets the market's needs. Liqid gives us a solution for both.

Composable infrastructure was born as a combination of some of the best parts of converged and hyperconverged infrastructure, but unlike HCI, it does not insert a software virtualization layer. Composable infrastructure maps bare-metal resources to create a server instance that can be configured in a short time at the request of the user or application, and reconfigured if necessary. Composable solutions take a building-block approach, where resources are implemented as disaggregated pools and managed dynamically through software.


Picture 1. Liqid


Liqid is the global leader in software-defined composable infrastructure, delivering the optimal adaptive architecture for next-generation AI-driven applications in HPC environments. The Liqid Composable platform empowers users to manage, scale, and configure physical bare-metal server systems in seconds and then reallocate core data center devices on-demand as workflows and business needs evolve.

The adaptive breadth of the platform makes it the most comprehensive composable infrastructure solution on the market today. For example, Liqid use cases range from AI and deep learning, HPC, and clustering to dynamic cloud and 5G edge.

Liqid Composable Infrastructure is designed to solve these problems by enabling valuable resources to be deployed through software, in just the right amounts, with a minimum of time spent on IT operations, across a PCIe fabric. It interconnects pools of compute, networking, data storage, and graphics processing devices over the PCIe fabric to deliver incredibly advanced results. The IT resources required to support an application are deployed on precisely the right hardware at exactly the right time and then redeployed to another application when no longer needed. Liqid has made the time to configure, for any resource capacity and infrastructure complexity, lower than ever, while preserving high precision.


Picture 2. Liqid Composable Disaggregated Infrastructure


Liqid enables the ideal use of IT resources. There are no idle steps due to setup configuration or expansion, saving time and money. The solution consists of three primary components:

  • Command Center to manage the whole system,
  • Fabric to connect various parts of the system,
  • and compute resources.

Picture 3. Liqid components


Liqid Command Center is the powerful management software platform that automates, orchestrates, and composes physical computer systems from pools of bare-metal resources. It provides administrators with a way to graphically and programmatically create the desired configuration systems using compute, storage, network, GPU, and other resources present on the Fabric.


Picture 4. Liqid Command Center




The Fabric is a crucial component of a Liqid solution. All components in a server (i.e., CPU, memory, storage, GPUs) are connected through the PCIe bus. The Liqid solution disaggregates devices, removing them from one physical point and placing them in designated physical enclosures. The Liqid software then maps these disaggregated devices across a PCIe fabric via copper or optical connections. Liqid fabric switches offer latency low enough to allow the components to be disaggregated: port-to-port measurements show that the latest PCIe 4.0 Liqid fabric switch has a latency below 100 nanoseconds, small enough not to impact complex applications.

Picture 5. Liqid Fabric


In collaboration with Broadcom, Liqid has developed new PCIe Gen 4 switches; a real example is the Liqid Grid LQD9448 48-port Gen 4 fabric switch. This scalable and highly available PCIe 4.0 switch doubles throughput compared with PCIe 3.0, allowing bandwidth of up to 256 GT/s (gigatransfers per second) per port, which reduces latency and increases performance.

This communication method, via the PCIe fabric, enables Liqid to avoid additional protocol translation, making communication between components easier and faster. Using the native PCIe network allows Liqid to combine many different resources and dynamically assign them to the physical server the user needs for a particular application.



The Liqid composable disaggregated infrastructure uses software to create bare-metal servers, deploying pools of resources built from commodity hardware: Optane-based memory, NVMe drives, NICs, GPUs, and FPGAs. Compute resources are provided by commodity x86 servers containing both CPU and RAM. The Liqid Command Center software is responsible for orchestrating and connecting resources from approved hardware across the ultra-low-latency PCIe fabric and all major data center fabrics, including PCI Express, Ethernet, and InfiniBand.


Picture 6. Liqid resources


Components such as GPUs, FPGAs, NICs, SSDs, and AICs are placed in chassis, which can be connected via PCIe or Ethernet. Several models are available, such as the LQD300x20X, LQD300x04X, and LQD300x24X Expansion Chassis.




Liqid delivers improved resources with software-defined infrastructure composability for solid-state non-volatile memory (NVMe) solutions. Liqid enables IT users to compose NVMe devices alongside FPGAs, CPUs, GPUs, NICs, and other PCIe-connected technologies. Composable NVMe on Liqid intelligent fabrics expands the promise of composability, allowing IT users to create and maintain a more efficient data center infrastructure. Likewise, Liqid enables the clustering of dozens of GPUs for the most demanding workloads, real-time GPU-to-CPU allocation, and even CPU bypass for "peer-to-peer" data transactions. High-performance GPU resources are allocated through software at bare metal across Liqid's ultra-low-latency platform, simplifying scalability and resource orchestration. Composable Intel Optane technology combines the unparalleled high throughput, low latency, quality of service (QoS), and endurance of Intel Optane SSDs with Liqid's industry-leading PCIe fabrics, providing composability with scaling of capacity and bandwidth. Achieving DRAM-like data speeds, Intel Optane SSDs are allocated directly across PCIe as bare-metal resources.

Composable NVMe, GPU, and Intel Optane unlock the performance and efficiency necessary for a wide variety of use cases in artificial intelligence and machine learning, high-performance computing, cloud, and edge computing environments.





