Aerohive HiveOS 3.2 HiveUI
List Price: Included in HiveAP 320 ($1,299) or HiveAP 340 ($1,499)
Pros: Quick-and-easy setup; built-in RADIUS server and captive portal; attractively priced
Cons: Little status monitoring; no reporting; limited to one hive with up to 12 members
Aerohive’s cooperative control strikes a unique balance in the WLAN architecture debate. HiveAPs are provisioned by a central manager, but otherwise operate autonomously. HiveAPs not only forward, filter, and shape traffic on their own–they even seek out neighbors to form a self-healing adaptive mesh that depends on neither a WLAN controller nor a root node.
But until recently, a HiveManager appliance was required to configure HiveAPs. Starting at $2,999, that appliance was less expensive than enterprise WLAN controllers, but prohibitive for small businesses with just a few HiveAPs. In HiveOS release 3.2r1, Aerohive took aim at this untapped market by releasing HiveUI: an embedded Web manager that can provision up to 12 HiveAPs without a HiveManager appliance. To learn about the benefits and limitations of HiveUI, we decided to use it to build our own little “hive.”
Assembling the pieces
Any Series 300 (802.11n) HiveAP running HiveOS 3.2r1 can be used to manage up to a dozen HiveAPs, including older Series 20 (802.11abg) HiveAPs. For our review, we used one HiveAP 340 to provision a total of three HiveAP 340s and one HiveAP 320.
Both HiveAP models are currently available at a promotional price of $999 each, so our four-node dual-radio WLAN retailed for $3,996. In this small WLAN, doing without HiveManager cut our CapEx a compelling 43 percent. But HiveUI isn’t an embedded HiveManager–it’s a scaled-down GUI, designed for quick-and-easy setup of small hives with relatively basic needs.
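That savings figure is easy to verify from the prices quoted above. The short Python calculation below is ours, using the promotional HiveAP price and HiveManager’s starting price; actual street prices may differ.

    # Back-of-the-envelope CapEx comparison using the prices quoted above.
    hiveap_price = 999          # promotional price per HiveAP 320/340
    hiveap_count = 4            # our four-node hive
    hivemanager_price = 2999    # HiveManager appliance starting price

    with_hiveui = hiveap_price * hiveap_count             # $3,996
    with_hivemanager = with_hiveui + hivemanager_price    # $6,995

    savings = 1 - with_hiveui / with_hivemanager
    print(f"Savings without HiveManager: {savings:.0%}")  # roughly 43%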
Starting a hive, for example, is faster with HiveUI than with HiveManager. Just plug one HiveAP into your wired network, open a Web browser to that AP’s DHCP-assigned IP, and launch the Startup Configuration page (see Figure 1). Promote that HiveAP to run HiveUI by checking “Server for WLAN Management”; all other HiveAPs default to “Client for WLAN Management.” The only parameters that absolutely must be configured here are a hive name and the passphrase used to secure future traffic between hive members with WPA2-PSK.
Figure 1. HiveUI Startup Configuration
Like HiveManager, a HiveAP that runs HiveUI operates as a CAPWAP server. By default, all HiveAPs behave as CAPWAP clients, periodically broadcasting Discovery Requests until a Discovery Response is received from a CAPWAP server. If the CAPWAP server is on the same wired or wireless segment, it hears those Discovery Requests and responds to them. If the CAPWAP server resides elsewhere, HiveAPs will find it by sending Discovery Requests to any IP bound to the hostname “hivemanager” or designated using the Startup Configuration page.
This standard CAPWAP exchange is how HiveUI learns about each brand new (or freshly reset) HiveAP on your network. However, auto-discovered HiveAPs cannot actually join your hive until an administrator moves them to the Managed HiveAP list. CAPWAP Join Request/Response messages are then exchanged, establishing the secure session that HiveUI uses to provision and maintain members of a single hive.
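To make the discovery sequence concrete, here is a minimal Python sketch of the client-side fallback order described above: broadcast on the local segment, then try the “hivemanager” hostname, then any statically configured address. It is illustrative only–the placeholder payload and timeout handling are ours, not real CAPWAP framing.

    import socket

    CAPWAP_PORT = 5246                         # standard CAPWAP control port (UDP)
    DISCOVERY_PAYLOAD = b"DISCOVERY-REQUEST"   # placeholder, not a real CAPWAP frame

    def try_candidate(address, timeout=2.0):
        """Send a placeholder discovery request and wait briefly for any reply."""
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.settimeout(timeout)
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            try:
                sock.sendto(DISCOVERY_PAYLOAD, (address, CAPWAP_PORT))
                sock.recvfrom(1024)            # any response counts for this sketch
                return True
            except (socket.timeout, OSError):
                return False

    def discover(static_server=None):
        """Try broadcast first, then the 'hivemanager' hostname, then a static IP."""
        candidates = ["255.255.255.255"]
        try:
            candidates.append(socket.gethostbyname("hivemanager"))
        except socket.gaierror:
            pass
        if static_server:
            candidates.append(static_server)
        for address in candidates:
            if try_candidate(address):
                return address
        return None                            # a real HiveAP keeps retrying periodically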
Getting this far took us five minutes when using a HiveAP with the latest HiveOS. Unfortunately, the first HiveAP we tried ran older firmware that merely burped out a cryptic error: Failed opening required AhWebUIConf.class.php5. In fact, two of the five HiveAPs we received arrived with older firmware because our tests started right after HiveOS 3.2r1 was announced. This temporary mismatch was easily corrected, but played a role in a few other hiccups we encountered.
For comparison, we repeated set-up using a full-blown HiveManager appliance. It took us roughly 30 minutes to reach this same point because HiveManager requires console port CLI initialization, followed by Web map initialization. Topology maps are one of many features found in HiveManager, but not HiveUI–for good reason. With a dozen APs, creating tiered maps would add complexity for little benefit. But for enterprises with hundreds or thousands of HiveAPs, correlating devices to mapped locations (Site/Building/Floor) is an absolute necessity.
Branching out
Like many enterprise-class APs, 300 Series HiveAPs have two PoE-capable 10/100/1000 Ethernet ports for cabling to a wired backhaul network. Any HiveAP tethered this way is said to be operating as a “portal.” (The second Ethernet interface can be used for dual-homing, failover, or providing bridged access to another wired segment.)
Alternatively, HiveAPs can behave as “mesh points” that establish wireless backhaul links through any nearby HiveAP portal. Each HiveAP 320 or 340 has one 2.4 GHz 802.11b/g/n radio (wifi0) and one 5 GHz 802.11a/n radio (wifi1). By default, wifi0 is configured for access; wifi1 for backhaul. This makes covering an unwired space simple: just provide AC power and the mesh point magically does the rest. So long as at least one other backhaul-capable HiveAP is within range, mesh points determine their own default route onto your wired network, using it to get an IP address and then find the HiveAP running HiveUI.
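The route-selection step boils down to picking a wired exit among reachable neighbors. The sketch below only illustrates that idea–the neighbor records, names, and simple signal-strength tiebreaker are invented, and Aerohive’s actual AMRP routing is more sophisticated.

    # Illustrative only: how a mesh point might pick its path to the wired network.
    neighbors = [
        {"name": "hiveap-lobby",  "is_portal": True,  "rssi_dbm": -61},
        {"name": "hiveap-stock",  "is_portal": False, "rssi_dbm": -48},
        {"name": "hiveap-office", "is_portal": True,  "rssi_dbm": -55},
    ]

    # Prefer neighbors that are wired portals, then the strongest signal.
    portals = [n for n in neighbors if n["is_portal"]]
    best = max(portals, key=lambda n: n["rssi_dbm"]) if portals else None

    if best:
        print(f"Backhaul via {best['name']} at {best['rssi_dbm']} dBm")
    else:
        print("No portal in range; keep scanning for a backhaul-capable neighbor")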
In many enterprise WLANs, PoE provides simultaneous power and backhaul connectivity. However, in small businesses where PoE is uncommon, HiveAPs that are powered up before being cabled to Ethernet may form unintended wireless backhaul links. But even if accidental backhaul links do form, HiveAPs always fall back to Ethernet when available. (If you want wireless backhaul to take precedence, reconfigure Ethernet interfaces to bridge mode.)
WLANs that use wireless backhaul can do so in two ways: dedicate one radio to full-time backhaul (the default) or use both radios for access while designating one for backhaul failover. To enable the latter, radio profiles must be configured to trigger wireless failover whenever Ethernet is down for X seconds, reverting after Ethernet is back up for Y seconds. In practice, we found this a bit tricky; HiveAPs must fail over to a backhaul band/channel offered by a nearby portal–that cannot happen if every other HiveAP is still using both radios for access.
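The X/Y thresholds behave like a simple hysteresis timer. The Python sketch below captures that logic under our own assumptions (placeholder link check, arbitrary thresholds); the real behavior is driven by the HiveAP radio profile, not code like this.

    import time

    ETH_DOWN_SECONDS = 10    # "X": how long Ethernet must stay down before failing over
    ETH_UP_SECONDS = 30      # "Y": how long Ethernet must stay up before reverting

    def ethernet_is_up():
        return True          # placeholder link check for this sketch

    def backhaul_control_loop(poll_interval=1.0):
        wireless_backhaul = False
        last_state = ethernet_is_up()
        changed_at = time.monotonic()
        while True:
            state = ethernet_is_up()
            if state != last_state:
                changed_at = time.monotonic()     # link state flipped; restart the timer
                last_state = state
            elapsed = time.monotonic() - changed_at
            if not state and not wireless_backhaul and elapsed >= ETH_DOWN_SECONDS:
                wireless_backhaul = True          # fail over to wireless backhaul
            elif state and wireless_backhaul and elapsed >= ETH_UP_SECONDS:
                wireless_backhaul = False         # revert to Ethernet backhaul
            time.sleep(poll_interval)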
In fact, this failover configuration is one of the few wireless mesh options surfaced by HiveUI. Most other hive options exposed by HiveManager are hidden by HiveUI, including backhaul thresholds and traffic filters. In a pinch, HiveUI admins can use the HiveAP CLI to query mesh status (e.g., show amrp neighbor) as shown in Figure 2.
Figure 2. Managed Hive composed of portal and mesh points
We agree that small WLAN admins generally should not have to tweak parameters like the minimum signal strength required to form a backhaul link–a setting more useful in larger, high-density WLANs. For small businesses, mesh networks that are self-forming and self-healing are best. However, we think this audience could benefit from backhaul status change and traffic utilization summaries not currently provided by HiveUI. For example, if essential parameters like the hive passphrase or backhaul radio profile are modified while a mesh point is down, it cannot reconnect–small WLAN admins may need hints to resolve such problems.
Figure 3. Basic WLAN configuration. Click to enlarge.
Anything the least bit complicated is hidden under Optional or Advanced tabs (see Figure 3). A novice configuring a first small-business WLAN gets a GUI that feels familiar, while experienced admins can dig deeper as needed. For example, those who don’t need VLAN tags or QoS priorities will not be distracted by those Advanced Settings, yet the settings remain readily accessible on the same page.
In our view, HiveUI succeeds in keeping basic tasks simple without reducing everyone to that lowest common denominator. But there are limits to what can be accomplished this way. By default, each SSID is bound to one auto-generated user profile and default priority. Alternatively, those with multi-purpose WLANs can configure QoS objects that apply per-user and per-priority rate limits and scheduling algorithms. Admins who head down the latter path must then bind QoS objects to uniquely numbered user profiles that determine how authorized traffic will be treated, based upon attributes assigned at authentication time.
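The relationship between these objects is easier to see laid out side by side. The sketch below models it in Python with invented names, numbers, and fields–a rough illustration of the binding, not HiveUI’s actual data model.

    # Invented example: QoS objects capture rate limits and scheduling, and
    # uniquely numbered user profiles bind authenticated users to one of them.
    qos_objects = {
        "guest-besteffort": {"per_user_rate_kbps": 512,  "scheduling": "weighted round robin"},
        "staff-priority":   {"per_user_rate_kbps": 5000, "scheduling": "strict"},
    }

    user_profiles = {
        10: {"ssid": "corp-guest", "qos": "guest-besteffort", "vlan": 20},
        20: {"ssid": "corp-dot1x", "qos": "staff-priority",   "vlan": 10},
    }

    def treatment_for(profile_id):
        """Look up how traffic for an authenticated user should be treated."""
        profile = user_profiles[profile_id]
        return {"ssid": profile["ssid"], "vlan": profile["vlan"], **qos_objects[profile["qos"]]}

    print(treatment_for(10))   # the profile number is assigned at authentication time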
While these profiles are not rocket science, they are a hefty step beyond basic WLAN setup. On the other hand, many advanced options are not accessible through HiveUI–including Aerohive’s new Dynamic Airtime Scheduling (see Part 2 of our review). Larger businesses that use HiveUI to experiment with a small test WLAN can step up to HiveManager when ready for rollout–but they won’t have full access to enterprise-class options until they do.
User authentication
For our test, we configured three SSIDs: a guest WLAN, a WPA2-Personal (PSK) WLAN, and a WPA2-Enterprise (802.1X) WLAN. Our WPA2-Personal WLAN was no more difficult to configure than on most SOHO APs. Small businesses often stop there–open guest WLANs invite abuse, while 802.1X requires infrastructure they don’t have. HiveUI can help small businesses address both of these common pain points.
For starters, HiveUI provides a “zero-config” internal RADIUS server. Aimed squarely at small businesses, this on-board HiveUI service is a simplified version of HiveManager’s RADIUS server. In enterprise 802.1X deployments, admins configure one or more RADIUS servers with digital certificates, user/group policies, and access to a user account database like Active Directory. Small businesses that need this can still configure HiveUI to consult an external RADIUS server for any SSID.
However, those that want 802.1X benefits without this complexity can use the HiveUI internal server to authenticate up to 512 permanent or temporary logins. For example, HiveUI can create randomly-generated temporary logins and passwords to be used by guests for up to 120 hours (see Figure 4). While this internal RADIUS server can’t share accounts defined for your Windows domain, it could be just the ticket for many small WLANs.
Figure 4. Embedded RADIUS User Authentication. Click to enlarge.
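Functionally, those temporary accounts boil down to a randomly generated login and password with a limited lifetime. Here is a minimal sketch of that idea in Python; the field names and formatting are ours, not HiveUI’s.

    import secrets
    import string
    from datetime import datetime, timedelta

    MAX_LIFETIME_HOURS = 120     # HiveUI's cap on temporary guest logins

    def make_guest_account(lifetime_hours=24):
        """Generate a throwaway guest credential with an expiry time."""
        lifetime_hours = min(lifetime_hours, MAX_LIFETIME_HOURS)
        alphabet = string.ascii_lowercase + string.digits
        return {
            "login": "guest-" + "".join(secrets.choice(alphabet) for _ in range(6)),
            "password": secrets.token_urlsafe(8),
            "expires": datetime.now() + timedelta(hours=lifetime_hours),
        }

    print(make_guest_account(48))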
Beyond go/no-go authentication, each defined login is bound to a user profile associated with an authorized SSID, IP packet filters, bandwidth limits, QoS priorities, and security requirements. For example, we let our temporary users connect to our unencrypted guest SSID, sending only DNS, DHCP, and Web traffic at best effort priority after passing captive portal login. But we let our permanent users connect to our WPA2-Enterprise SSID, sending any kind of AES-encrypted IP traffic (including high priority traffic) after passing 802.1X authentication.
Figure 5. Guest User Profile with Bandwidth and Traffic Restrictions
Here again, HiveUI caters to simple needs, while providing unobtrusive hooks for greater control. Those control objects–user profiles, QoS objects, and firewall rules–are configured using separate HiveUI menus that are slightly more complex (see Figure 5). However, we found one simplification that generated extra work: because each login is bound to just one user profile, anyone allowed to connect to more than one SSID needs multiple logins and passwords.
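The firewall piece of a guest profile is easy to picture as a small allow-list. The sketch below mirrors the guest restrictions we configured (DNS, DHCP, and Web only); the rule format is invented for illustration, though the port numbers are the usual well-known values.

    # Invented rule format; allows only DNS, DHCP, and Web traffic for guests.
    GUEST_ALLOWED = {
        ("udp", 53),   # DNS
        ("tcp", 53),   # DNS over TCP
        ("udp", 67),   # DHCP server
        ("udp", 68),   # DHCP client
        ("tcp", 80),   # HTTP
        ("tcp", 443),  # HTTPS
    }

    def guest_permits(protocol, dest_port):
        """Return True if a guest packet to this protocol/port should be forwarded."""
        return (protocol, dest_port) in GUEST_ALLOWED

    assert guest_permits("udp", 53)
    assert not guest_permits("tcp", 25)   # SMTP blocked, for example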
To facilitate guest access, HiveUI (and HiveManager) provides embedded captive Web portals. Any SSID can be configured to redirect HTTP/HTTPS from unauthenticated clients to AP-resident captive portals that display a RADIUS authentication page or a self-registration page. RADIUS authentication is carried out by an internal or external server as described for 802.1X. Self-registration uses a Web page to request name, e-mail, etc., before permitting unconditional access. HiveManager supports custom landing pages; HiveUI uses Aerohive default pages.
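At its core, captive-portal redirection is just an HTTP 302 sent to clients that have not yet logged in. The bare-bones Python sketch below illustrates the mechanism; the portal URL, port, and authenticated-client tracking are placeholders and have nothing to do with Aerohive’s implementation.

    from http.server import BaseHTTPRequestHandler, HTTPServer

    PORTAL_URL = "http://192.0.2.1/login"   # placeholder portal address
    authenticated_clients = set()           # client IPs that have passed login

    class RedirectHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.client_address[0] in authenticated_clients:
                self.send_response(200)
                self.end_headers()
                self.wfile.write(b"Welcome back\n")
            else:
                self.send_response(302)     # bounce to the captive portal
                self.send_header("Location", PORTAL_URL)
                self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("", 8080), RedirectHandler).serve_forever()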
Note that using embedded authentication introduces central dependencies not otherwise present in cooperative control hives. Specifically, if the HiveAP running HiveUI becomes unreachable, existing guest and 802.1X sessions continue, but new sessions cannot be authenticated. A guest who tries to connect will be redirected to the local HiveAP’s captive portal, but RADIUS authentication will time out. An authenticated 802.1X client can roam to other HiveAPs using fast reconnect and cached pairwise master keys (known to all HiveAPs), but an 802.1X client that deauthenticates cannot reconnect until the HiveAP running HiveUI becomes available again.
Figure 6. Dual-band 802.11n radio configurations. Click to enlarge.
Two pre-defined radio profiles are bound to HiveAP 320 and 340 interfaces by default: wifi0 uses radio_ng, while wifi1 uses radio_na. The default radio_ng profile supports 802.11g and 802.11n clients using a 20 MHz channel and 3×3 MIMO, without short guard interval or 802.11b protection. The radio_na profile supports 802.11a and 802.11n clients in a similar fashion.
To modify 802.11n parameters, we created our own radio profiles–the HiveManager “clone” feature is not available in HiveUI. For example, we created a my_radio_na profile to allow only 802.11n clients, using a 40 MHz channel and 3×3 MIMO with short guard interval. We then applied that radio profile to each HiveAP’s wifi1 interface. The ability to apply the same interface update (radio profile, auto/specified channel, access/backhaul mode) to several HiveAPs at once would be a welcome addition, helping to avoid inconsistencies.
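To show what we mean, here is what such a bulk update might look like expressed in Python. The profile fields mirror the settings described above, but the AP names and the apply call are placeholders, not a HiveUI or HiveOS API.

    # Hypothetical bulk update: one profile definition pushed to wifi1 on every AP.
    my_radio_na = {
        "mode": "11n-only",
        "channel_width_mhz": 40,
        "mimo": "3x3",
        "short_guard_interval": True,
    }

    hiveaps = ["hiveap-340-1", "hiveap-340-2", "hiveap-340-3", "hiveap-320-1"]

    def apply_profile(ap, interface, profile):
        # Placeholder for whatever provisioning call would actually push the change.
        print(f"{ap}: applying {profile['mode']} profile to {interface}")

    for ap in hiveaps:
        apply_profile(ap, "wifi1", my_radio_na)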
Small WLAN admins may not appreciate the potential impact of choices like configuring an 802.11b/g/n profile to use 40 MHz channels. To address this, HiveUI provides a fairly extensive context-sensitive, searchable Help system. We found most answers that we looked for in Help, with one noteworthy exception: the same radio cannot provide simultaneous backhaul and access, but this fundamental limitation is buried under SSID Advanced Settings Help.
We ran into this after defining our radio profiles and SSIDs without error, only to find our SSIDs beaconed at 2.4 GHz but not at 5 GHz. Using the CLI to query interface status, we discovered that wifi1 was not bound to our SSIDs. It turns out that wifi1 had been left in its default backhaul mode, causing our 5 GHz SSID assignments to fail. A more prominent warning in the radio profile help section might have helped us avoid this, but we also think that HiveUI should have flagged the conflict when the SSID was applied to both bands/radios.
Finally, one advanced radio profile knob carried over from HiveManager ended up having no visible impact in HiveUI. With either management system, HiveAP radios can be configured to perform periodic background scans for rogue APs. HiveManager provides policies to classify rogue APs and a window through which to view them, but HiveUI does not. Rogue detection could be useful to small businesses if carried over more fully into HiveUI.
Provisioning and update
One major difference between HiveManager and HiveUI is the way in which HiveAP configuration changes are applied.
Like a typical enterprise NMS, HiveManager accumulates pending changes until an admin decides to apply them to one or more HiveAPs immediately, after a specified delay, or at next reboot. This approach is appropriate when making a number of related changes that really should be deployed (or rolled back) as a single step, minimizing network or user disruption. HiveManager also provides explicit control over profiles used to auto-provision new HiveAPs.
In contrast, HiveUI changes are applied to all affected HiveAPs as soon as the Apply button is clicked on any configuration panel. For example, if an SSID is modified to require a different type of security, HiveUI attempts to push that new configuration to all HiveAPs immediately. If a static channel assignment is changed on one wireless interface, HiveUI updates that HiveAP immediately.
The HiveUI approach spares small WLAN admins from forgetting to activate updates or from having to decide which HiveAPs an update should be applied to. However, this approach is most convenient only when all goes well. If one of several HiveAPs is down when a change is made, its current config will not match the HiveUI config. The HiveUI Device list draws attention to mismatches and provides tools to remedy them (see Figure 7). These same tools can also be used to explicitly provision a new HiveAP accepted into an existing hive.
Figure 7. Resolving a HiveUI / HiveAP config mismatch
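Conceptually, mismatch detection amounts to comparing the configuration HiveUI intends to push against what each HiveAP is actually running. The digest comparison below is purely our illustration of that idea–we do not know how HiveUI implements it internally.

    import hashlib
    import json

    def config_digest(config):
        """Hash a configuration so two copies can be compared cheaply."""
        return hashlib.sha256(json.dumps(config, sort_keys=True).encode()).hexdigest()

    desired = {"ssid": "corp-guest", "security": "open", "captive_portal": True}
    running = {"ssid": "corp-guest", "security": "open", "captive_portal": False}

    if config_digest(desired) != config_digest(running):
        print("Config mismatch: this HiveAP needs to be re-provisioned")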
This is one area where we found HiveUI’s simplified approach somewhat limiting. HiveUI does not let you apply different SSIDs to selected HiveAPs, as you can with HiveManager. HiveUI also does not appear to detect or describe HiveAP config errors as well as HiveManager. On the other hand, HiveUI does let you load firmware onto selected APs only–essential to avoid wireless backhaul link disruption when updating mesh points.
Monitoring and reporting
Transparent HiveAP discovery and simplified configuration are HiveUI’s strong suits. Based on our experience, many small business hives can indeed be provisioned and maintained using nothing more than HiveUI. However, HiveUI provides only limited monitoring and no historical reporting. If these capabilities are important to your business, invest in a full-blown HiveManager.
HiveUI Status merely indicates whether HiveAPs are connected to or disconnected from the CAPWAP server (i.e., the HiveAP running HiveUI). Syslog file tails can be retrieved from any single HiveAP and CLI commands can be sent to one, some, or all HiveAPs (see Figure 8). For example, show interface shows which virtual AP MAC addresses and auto-selected channels are now used by configured SSIDs, while show ssid <name> station lists currently associated clients and their session attributes.
Figure 8. Using HiveUI cut-through to invoke HiveAP CLI commands. Click to enlarge.
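Admins who prefer scripting to clicking could achieve much the same result from a management host. The Python sketch below uses the third-party paramiko library to run the documented show command against several HiveAPs; the addresses, SSID name, and credentials are placeholders, and it assumes SSH access to the HiveAP CLI is enabled.

    import paramiko   # third-party SSH library (pip install paramiko)

    HIVEAPS = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]   # placeholder addresses
    COMMAND = "show ssid corp-guest station"            # list associated clients

    for address in HIVEAPS:
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect(address, username="admin", password="changeme", timeout=5)
        _, stdout, _ = client.exec_command(COMMAND)
        print(f"=== {address} ===")
        print(stdout.read().decode())
        client.close()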
CLI cut-through is helpful but not a replacement for real-time status monitoring and reporting. For example, CLI commands proved helpful when we needed to diagnose our SSID / interface / radio profile configuration error. But we found them insufficient to debug authentication failures caused by defining SSIDs that required internal RADIUS authentication when we had not yet enabled that service.
Small businesses may need to combine HiveUI with shareware WLAN analysis tools to troubleshoot problems and monitor WLAN utilization. Larger businesses that buy into HiveManager receive a healthy collection of near-real-time and historical reports and graphs that did not make their way into HiveUI. We would hope to see HiveUI narrow this gap in a future release by giving small businesses a few basic reports–for example, an SSID traffic summary and a client session log. In the meantime, a CLI “cheat sheet” for small WLAN admins would be handy.
Conclusion
With HiveUI, small businesses can tap Aerohive’s cooperative control architecture to build feature-rich WLANs without having to pay for enterprise controllers or management systems.
New admins will not only find HiveUI approachable, but may build more secure WLANs by using the embedded captive Web portal, RADIUS server, and IP firewall. Those who require more control–especially per-user QoS priorities and bandwidth limits–can dig deeper without having to spend $3K for a HiveManager.
However, companies with more than a dozen HiveAPs–including those with small remote office WLANs–should step up to HiveManager to get features required by large distributed networks, including topology maps, real-time monitoring, and historical reporting. Stay tuned for part 2 of our Aerohive review, where we’ll use HiveManager to explore alternative QoS strategies.
Lisa Phifer owns Core Competence, a consulting firm focused on business use of emerging network and security technologies. With over 25 years of experience in the NetSec industry, she has been involved in wireless product and service design, implementation, and testing since 1997.