The latest industry news from the Brickworkx experts
We believe that constant education, both for ourselves and for our customers, is a key piece of the puzzle for success. The Brickworkx team likes to share their knowledge and experiences as much as possible. We’re often either at a customer’s location learning from them or at a conference, seminar or industry trade show gleaning as much knowledge as possible. The Brickworkx blog is our outlet to share this information. We welcome your comments and feedback.
One of the unique things about EVOTEK is the opportunity for every employee to earn a spot in our partner program. Being a Partner is an opportunity to help steer the direction of the business, participate in the inner workings of the organization, and be a growth catalyst for our customers. My personal journey to partner meant jumping from a very comfortable position as one of the top performers within a good organization into the unknown, building a services organization within a small, scrappy startup with what I believed to be great vision and a lot of heart. The ensuing two and a half years have been an incredible journey with many challenges, learning opportunities, and milestones, and they have yielded tremendous personal and financial growth. While it wasn't the easy path, it has certainly been a remarkable and enjoyable one.
This summer I felt there was another hour of opportunity at hand: making a move out of my comfort zone in beautiful Southern California to the heat and bright lights of Las Vegas. As a Partner and leader within the organization, a move away from headquarters may seem counterintuitive, but I believe there is tremendous opportunity for disruption through innovation in Nevada, as the state caters to user experience and disposable income. I approached our founder, Cesar Enciso, with my idea and desire to make the move and received his full support.
EVOTEK has been on an incredible growth curve since its inception, and I believe that can be directly attributed to hiring the right people, believing in them to do what is right, and supporting them in pursuing these opportunities. We are on the hunt for like-minded individuals who want more out of their careers and need someone to believe in them. There is never a "right" time, but this is an hour of opportunity in which a life-changing decision can be made. It was absolutely the correct decision for me, and if you are a driven, customer-focused individual, I am certain it will be the right place for you as well. Seize this opportunity and make EVOTEK the last job you will ever have.
Engaging customers via their mobile devices is an exciting proposition for many organizations; however, it has to be done with care. These solutions often carry a significant cost and depend on a Return on Investment (ROI) model to make sense.
Achieving this ROI requires walking a fine line between meaningful engagement and being a nuisance. Here are five best practices to help you do that.
5 ways to ensure your mobile strategy works
1. Think big picture
The goal is to create a user experience that provides vast amounts of data to the organization while delivering value to the customer. Accomplishing that means the experience needs to be immersive and omni-channel (e.g., SMS, email, app-based, digital signage, direct mail, etc.).
Too many organizations jump straight to the mobile application without realizing adoption of mobile applications is low and retention of those mobile apps is even lower. A holistic approach that embraces the web (traditional and mobile), mobile apps, digital and physical signage, and some of the emerging areas such as augmented reality (AR) and context-aware chatbots will be far more successful.
Analytics and business intelligence tools must be included because understanding the success of these messages and their impact on the bottom line is a necessity, as engagement attempts that are ill-received may create a negative effect on the business.
2. Establish a baseline
Before rolling out any new engagement solution, or even a single targeted campaign, it is important to understand the baseline: what is normal for a specific time of day, day of week, demographic, location, and so on?
If these baselines are unknown, the success of an engagement will likely be unknown as well. The length of time needed to establish a credible baseline depends on the business and vertical; however, for many organizations a month of observations will be statistically meaningful.
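To make the idea concrete, here is a minimal Python sketch (with invented example data) of what a per-hour baseline and anomaly check might look like; in practice this calculation would come from your analytics platform rather than hand-rolled code.

```python
from collections import defaultdict
from statistics import mean, stdev

def hourly_baseline(events):
    """Group (hour_of_day, count) observations and compute a per-hour baseline.

    `events` is a list of (hour, count) tuples -- an illustrative stand-in
    for whatever engagement metric your organization tracks.
    """
    by_hour = defaultdict(list)
    for hour, count in events:
        by_hour[hour].append(count)
    # Baseline per hour: (mean, standard deviation)
    return {h: (mean(c), stdev(c) if len(c) > 1 else 0.0)
            for h, c in by_hour.items()}

def is_unusual(baseline, hour, count, threshold=2.0):
    """Flag a new observation more than `threshold` std devs from the mean."""
    avg, sd = baseline[hour]
    if sd == 0:
        return count != avg
    return abs(count - avg) / sd > threshold
```

A campaign's impact can then be judged against the baseline rather than in isolation: a lunchtime spike only counts as a success if it exceeds what lunchtime normally looks like.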
3. Consider your social credibility
Each engagement or touchpoint with the user must be carefully weighed prior to being implemented, as the organization is spending “social credibility” with the customer in issuing these engagements. Determining that a message is hitting the right person at the right time and place is paramount to success.
While the organization may want to drive a specific behavior, it must be done in such a way that it is graciously accepted by the recipient. For less important messages, consider other channels for delivery—email, direct mail and digital signage integrations are options that are less invasive than a targeted push message.
4. Leverage employee engagement
Businesses should ensure the human component isn't lost in this digital marketing frenzy.
Consider a scenario in which an employee could be notified when a user has spent more than five minutes in front of a specific retail display or there has been a high density of users in line for a drink at a sports game or concert venue. Rather than trying to ping users to have them go find another bar, consider triggers that have an employee come over with a mobile payment system and perform line-breaking transactions. This human component may still be considered a “digital engagement,” but it won’t feel like it to the consumer.
5. Keep it fresh
Digital engagements should always be timely and relevant. Organizations can't afford to be lazy about managing these platforms: pushing irrelevant messages will drive customers away, prompt them to remove the mobile app, and even push them toward competitors.
Campaigns should also create a sense of urgency, whether by instilling a fear of missing out or at least by making clear that an immediate deal is good for only the first 100 redemptions.
Gamification is one way to keep things interesting for consumers, and it can drive additional spend as it may promise “bonus” rewards for the additional engagement. The solutions should be simple enough that they can be managed by marketing teams and not IT.
Heading into Aruba Atmosphere this year, I was most excited to see Aruba's new Niara solution in action and learn more about a product that solves a very real need in every network. Inherently, any network policy grants some level of access, and users are free to work within the confines of that policy. Even 802.1X-based authentication with dynamically provisioned VLANs, access roles, downloadable ACLs, and the like isn't necessarily enough. Niara addresses these gaps in an appealing way and lessens the workload for SecOps teams.
Case #1: Stolen Credentials
A known valid user can operate within their policy, but what happens if they are compromised through social engineering, weak passwords, poor password management, or similar means? Niara builds a profile of a specific user's typical behavior, and if their patterns change, the system identifies it. Perhaps the user starts attempting to access new areas or visiting new websites; that change in behavior can signal the need for a change in policy, trigger an alert to the SecOps team, or eventually drive automated remediation or lockdown of the user. Comparing against a baseline, as well as against other similar users, gives Niara a frame of reference for the user under evaluation.
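As a toy illustration of this kind of behavioral profiling (not how Niara is actually implemented), one could compare the set of resources a user touches today against their own history using simple set similarity:

```python
def jaccard(a, b):
    """Similarity between two sets of visited resources (1.0 = identical)."""
    union = a | b
    return len(a & b) / len(union) if union else 1.0

def flag_deviating_users(history, today, threshold=0.3):
    """Flag users whose activity today looks unlike their own history.

    `history` and `today` map user -> set of accessed resources; the
    threshold and resource names are invented for illustration only.
    """
    return [user for user, seen in today.items()
            if jaccard(history.get(user, set()), seen) < threshold]
```

A real system layers far more signal on top of this (timing, peer-group comparison, volume), but the core intuition is the same: deviation from an established profile is what surfaces a compromised credential.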
Case #2: Malware and Viruses
Both malware and viruses are capable of changing the behavior of network-attached clients. While numerous tools already exist to combat these threats, Niara could serve as a welcome tool to identify and isolate infected clients, or, in a perfect world, learn how a zero-day attack might attempt to compromise the network and automatically harden it in anticipation. Combining these capabilities with Aruba's open APIs through the Aruba Exchange offers some very interesting possibilities, enabling the collection of data from ecosystem partners with deeper specialties in the malware and virus arena. Imagine a world in which your firewall vendor detects a new type of malware, shares that data with Aruba ClearPass and Niara via APIs, syslog, a SIEM, or similar routes, and the network automatically reacts to prevent the spread of that malware at the same time you are being notified.
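A sketch of what the transport for such an integration could look like: the message format, facility value, and collector endpoint below are purely illustrative, and a real deployment would use the vendor-documented ClearPass or SIEM ingestion mechanisms.

```python
import json
import socket

def send_threat_event(host, indicator, severity, port=514):
    """Send a malware indicator as a syslog-style UDP message.

    `host`/`port` would point at a log collector; the "<134>" priority and
    the JSON payload shape are invented examples, not a vendor format.
    """
    payload = json.dumps({"indicator": indicator, "severity": severity})
    msg = f"<134>firewall: THREAT {payload}".encode()
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(msg, (host, port))
    finally:
        sock.close()
```

The interesting part isn't the plumbing; it's that once the event lands in the policy engine, the network can quarantine affected clients automatically instead of waiting for a human to read the alert.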
Case #3: Software Bugs/Anomalous Behavior
If an application is updated and begins to operate differently on the network, Niara can identify this and enable teams to understand the new behavior. New behaviors deemed risky can be mitigated, and feedback can be provided to the company's development team. A specific example given at the conference involved a popular file-sharing company whose update generated unwanted traffic on the network; Niara's machine learning identified the behavior so it could be stopped.
Aruba, a Hewlett Packard Enterprise company, opens the door to a world of possibilities with the addition of machine learning and extends those capabilities elegantly through the open architecture of Aruba Exchange. I anticipate that machine learning is going to explode in the networking world, as IT teams face increasingly difficult security challenges and are being asked to do more with fewer people and fewer resources. Automating detection and defense should be able to solve 75-80% of the issues out there, freeing IT to focus on the most challenging and highest-value problems.
February is an exciting month this year, as two of my favorite conferences are held in back-to-back weeks: the Wireless LAN Pros Conference in Phoenix, AZ, followed by Aruba Atmosphere in Nashville, TN. This year I have opted to present at the WLPC conference on the WLAN engineer's role in digital disruption, and I was invited to participate in the Tech Field Day live panel at Atmosphere. I have dedicated a portion of my weekends to preparing for these events to ensure that I do my best and that the group benefits from the time I have been allocated. Despite this being my third year doing these events, I am always amazed how much I learn from preparing to teach others. My goals for these two events this year are the following:
Keeping it Simple
We have a varied demographic at these events, so one of my goals is to explain the content in a simple way without "dumbing it down". I've found this to be a great source of my own personal learning, as I need to fully understand a topic before I can simplify it without destroying the point I am trying to get across.
Everyone who attends these events comes from a different background and has their own life experiences that shape their values and viewpoints. I strive to share my perspective in my presentations and, when I have the opportunity to field questions or discuss the content, to learn from others' perspectives.
My first presentations were the result of Keith Parsons challenging people to step up and share. Some presenters cover amazing technical material, and that can be intimidating; however, real-world experience is what these conferences are all about, and sharing experiences, whether through a presentation, the questions that get asked, or even social discussions at the bar, is welcome.
I look forward to the discussions ahead and both sharing with and learning from the other attendees.
For me, one of the most promising announcements at Mobility Field Day Live with Aruba, a Hewlett Packard Enterprise company, was the introduction of ClearPass Extensions. The concept behind this feature is to leverage a repository within ClearPass so that new features can be created and run without compromising the integrity of the system and its underlying code with some sort of "engineering special". This functionality adds substantial value to an already feature-rich ClearPass product.
ClearPass Extensions has enabled Aruba partners such as Microsoft, Intel Security, Kasada, and Envoy to develop innovative features that can ship ahead of a major code release, which improves feature velocity and, more importantly, client satisfaction.
Currently this is a relatively closed system, with Aruba handling development as a professional services engagement, but as a service-oriented partner we see the light at the end of the tunnel and are looking to create truly differentiating features for our customers that tightly integrate ClearPass with the business.
Aruba’s vision for where ClearPass Extensions will go includes a developer community and an “app store” enabling customers to download or purchase apps that have been developed specifically for ClearPass. Customers can also develop their own features, or engage any third party to do the integration for them in the future.
Creating an opportunity for partners to differentiate themselves and rewarding those that truly understand their customers' business is an appealing idea. Waiting on features that may take six months to arrive in a standard release punishes companies that are creative and forward-looking.
This model instead rewards those organizations with a competitive advantage and an option to generate additional revenue, depending on how the app store comes to life. The potential applications of these extensions are seemingly infinite, and the upside for organizations investing in them is tremendous.
Aruba, a Hewlett-Packard Enterprise company, unveiled their new Mobile First Platform last week and I had a front row seat as one of the Mobility Field Day Live delegates. Aruba’s announcement was made a day prior to our session, so it was pretty exciting to discuss such a fresh topic. The foundation that Aruba is creating here is impressive and the implications are tremendous, especially if we look at extrapolating this in the near future.
Aruba announced the release of AOS version 8.0, which marks the first major overhaul of the code in quite some time. This release is at the center of Aruba's Mobile First Platform and is designed to handle the next ten years of wireless, an ambitious goal given that the near future holds 802.11ax (aka 10-Gigabit Wi-Fi). Aruba highlighted that the intelligent layer of services required to run today's networks is reaching its limits on controllers, so they have created a new alternative in the form of a Mobility Master that runs these intelligent services on behalf of the controller hardware. The Mobility Master has been virtualized so that it can run as an x86 virtual machine on VMware (KVM support coming with version 8.0.1). This new role replaces the now-legacy Master Controller, so most environments will benefit from less hardware on-site and can leverage investments already made where the new architecture is desired. Also of interest to most: there is zero cost for these virtual machines; the only thing that matters is the number of access points being managed. The primary tradeoff between a controller-based and a virtualized infrastructure today is throughput, as the VM-based controllers lack hardware encryption modules and consequently cap out around 4-5 Gbps.
Aruba has also introduced a new UI with AOS 8.0, a welcome change as the previous interface had been fairly complicated for new users. The new UI brings some much-needed features such as simplified profiles, tab completion for profile names in the CLI, multithreading in the CLI, and more.
In-service upgrades are also new with AOS 8.0 and the Mobility Master. The increased compute and storage allow the services that now reside on the Mobility Master to be upgraded, with changes taking effect in the environment immediately, without requiring an upgrade to access points or controller infrastructure.
Watch more on AOS8 via the Tech Field Day YouTube Channel.
Zero Touch Provisioning
Included in the move to a Mobility Master is Aruba Zero Touch Provisioning, which allows the Mobility Master to handle all configuration for controllers throughout the environment. Additionally, the previous requirement that the Mobility Master and the managed controllers run the same version of code has been removed. The Mobility Master must run the latest code supported in the environment, but it is backwards compatible with older code running on the controllers. This will greatly benefit risk-averse customers, allowing them to quickly take advantage of new features in administrative buildings while rolling them out slowly to a hospital or manufacturing site.
The Multizone architecture allows SSIDs to terminate on multiple controllers, creating an end-to-end encrypted session from client to controller when in tunneled mode. Terminating SSIDs on different controllers extends beyond the data flow and into how the AP is managed. Controller 1, as the primary, sets all of the AP settings (IP address, DHCP, etc.). Controller 2 sets only the settings for SSID 2. An admin of controller 2 cannot see any information from controller 1, including SSIDs, security types, auth servers, users, and so on.
Aruba AOS 8 brings controller clustering to the table. All elements in a cluster must run the same code and be part of the same family (e.g., all 72XX controllers running 8.0 code). State information is maintained for clients and access points, with a designated backup controller within the cluster, and clusters also participate in user load balancing. The primary and backup controller for each user is maintained in the cluster and will be shared with AirWave later in the year. This is useful across all customer types, but especially those with very large campuses (e.g., higher education or Fortune 500 headquarters). Clusters scale to 12 controllers with the 72XX series and 4 with 70XX controllers.
Aruba Clarity allows an access point to associate to another access point and run synthetic tests from that "client AP" to the Clarity server, effectively building a baseline and providing tremendous visibility, especially for remote sites. Clarity Live tracks DHCP and DNS requests and responses in real time to profile the typical health of the network. Clarity Synthetic allows for RF performance testing, iPerf runs, and web page loads to a URL (Salesforce, etc.). Upcoming features that were hinted at but not confirmed include scheduling as well as wired-line monitoring and testing.
Another feature of AOS 8 is Aruba's new AirMatch capability, which enables better channel reuse. This is important because legacy radio management was designed for a previous era of wireless networks; in today's high-capacity world of users and things, the old way of doing things is no longer good enough. AirMatch looks at the system as a whole to maximize channel reuse and capacity, determining daily, based on a full day of usage, the best combination of radio settings. Advanced users will be able to tune AirMatch from the command line, but this will be hidden from the GUI to protect users from causing harm.
The Mobility Master will carry the context-aware APIs that exist with the Aruba Location Engine (ALE), enabling integrations with other systems via REST or publishing data over ZeroMQ to move it into a database or other resources. Configuration APIs have also been enabled, allowing the network, SSIDs, and more to be configured programmatically.
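As a small, hypothetical example of consuming that context data, the parser below assumes an invented JSON schema (the real field names come from the ALE API documentation); the raw payload could arrive over a ZeroMQ subscription or a REST poll.

```python
import json

def parse_ale_location(raw):
    """Extract (mac, x, y) tuples from a hypothetical ALE-style JSON payload.

    The "locations", "mac", "x", and "y" keys are assumptions for
    illustration; consult the vendor API docs for the real schema.
    """
    doc = json.loads(raw)
    return [(r["mac"], r["x"], r["y"]) for r in doc.get("locations", [])]
```

Once location records are normalized into simple tuples like this, they can feed whatever downstream system you like: a database, an analytics pipeline, or a trigger for the employee-engagement scenarios described earlier.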
Enhancements have been added that enable categorization of applications and grouping of applications. For instance, a group called “Students” or “Nurses” could be created simplifying management. Custom applications are now supported and AppRF definitions are now treated like antivirus updates and can be updated without impact to the network.
In all, I was impressed with what was announced for this release. Our delegate panel kept asking for more, but when you look at what has been accomplished, our requests were consistent with what you'd expect this roadmap to look like as it unfolds. The shift to an API-driven infrastructure is exactly where the world needs to be heading, and abstracting software from hardware is in line with every other major shift in the industry. I am looking forward to the APs themselves running microservices that can be upgraded, restarted, and so on with no impact to end users; it seems to be an inevitability at this point. This Mobile First Platform is well thought out and perfectly aligned with the automated, intelligent future we are all looking for, as it allows us to focus on the core business and offers much-needed agility.
Simplifying network management is a challenging task for any organization, especially those that have chosen a best of breed route and have a mix of vendors. I ask my customers to strive for these things when looking to improve their network management and gain some efficiency.
- Strive for a Single Source of Truth—As an administrator there should be a single place that you manage information about a specific set of users or devices (e.g. Active Directory as the only user database). Everything else on the network should reference that source for its specific information. Multiple domains or maintaining a mix of LDAP and RADIUS users makes authentication complicated and arguably may make your organization less secure as maintaining these multiple sources is burdensome. Invest in doing one right and exclusively.
- Standardization—Tremendous time savings can be found by eliminating one-off configurations, sites, situations, etc. An often overlooked part of this savings is consulting and contractor costs: the easier it is for an internal team to quickly identify a location, IDF, device, etc., the easier it will be for your hired guns as well. A system should be in place for IP address schemes, VLAN numbering, naming conventions, low-voltage cabling, switch port usage, redundancy, etc.
- Configuration Management—Creating a plan for standardization is one thing, ensuring it gets executed is tougher. There are numerous tools that allow for template-based configuration or script-based configuration. If your organization is going to take the time to standardize the network, it is critical that it gets followed through on the configuration side. DevOps environments may turn to products like Chef, Puppet or Ansible to help with this sort of management.
- Auditing and Accountability—Being proactive about policing these efforts is important, and that requires some form of accountability. Change control meetings should ensure changes are well thought out and meet the design standards; safeguards should ensure the right people are making the changes and that those changes can be traced back to a specific person (no shared "admin" or "root" accounts!), so that all the hard work put in to this point is actually maintained. New hires should be trained and indoctrinated in the system to ensure that they follow the process.
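The Configuration Management point above can be sketched in a few lines: the interface names and VLAN scheme below are invented examples of a site standard, and a production environment would more likely reach for Ansible or similar tooling than hand-rolled templates.

```python
from string import Template

# Minimal template for a standard access-port stanza. The syntax is
# Cisco-IOS-flavored for illustration; adapt it to your platform.
PORT_TEMPLATE = Template(
    "interface $port\n"
    " description $desc\n"
    " switchport access vlan $vlan\n"
)

def render_access_ports(ports):
    """Render a standard config stanza for each (port, desc, vlan) tuple."""
    return "".join(PORT_TEMPLATE.substitute(port=p, desc=d, vlan=v)
                   for p, d, v in ports)
```

The value isn't the template itself; it's that every port in every IDF is generated from the same standard, so deviations become impossible rather than merely discouraged.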
Following these steps will simplify the network, increase visibility, speed troubleshooting, and even help security. What steps have you taken in your environment to simplify network management? We’d love to hear it!
Network management doesn't have to be overly complex, but a clear understanding of what needs to be accomplished is important. In a previous blog series I talked about the need for a tools team to help in this process; a cross-functional team may be critical in defining these criteria.
- Determine What is Important—What is most important to your organization is likely different from what matters to your peers at other organizations, albeit somewhat similar in certain regards. Monitoring everything isn't realistic, and it may not even be valuable if nothing is done with the data being collected. Zero in on the key metrics that define success and determine how best to monitor them.
- Break it Down into Manageable Pieces—Once you’ve determined what is important to the business, break that down into more manageable portions. For example if blazing fast website performance is needed for an eCommerce site, consider dividing this into network, server, services, and application monitoring components.
- Maintain an Open System—There is nothing worse than being locked into a solution that is inflexible. Leveraging APIs that can tie disparate systems together is critical in today’s IT environments. Strive for a single source of truth for each of your components and exchange that information via vendor integrations or APIs to make the system better as a whole.
- Invest in Understanding the Reporting—Make the tools work for you; a dashboard alone is simply not enough. Most of the enterprise tools out there today offer robust reporting capabilities; however, these often go unused.
- Review, Revise, Repeat—Monitoring is rarely a "set and forget" item; it should be in a constant state of improvement, integration, and evaluation to enable better visibility into the environment and the ability to deliver on key business values.
As network engineers, administrators, architects, and enthusiasts, we are seeing a trend of relatively complicated devices that all strive to provide unparalleled visibility into the inner workings of applications or security. Inherent in these solutions is a level of complexity that challenges network monitoring tools; in many cases, vendors are pitching proprietary tools capable of extracting the maximum amount of data out of a specific box. Just this afternoon I sat on a vendor call with a customer for a technical deep dive of a next-generation firewall with a very robust feature set. Inevitably, the pitch was made to consider a manager of managers that could consolidate all of this data into one location. While valuable in its own right for visibility, this perpetuates the problem of many "single panes of glass".
I couldn’t help but think, what we really need is the ability to follow certain threads of information across many boxes, regardless of manufacturer—these threads could be things like application performance or flows, security policies, etc. Standards-based protocols and vendors that are open to working with others are ideal as it fosters the creation of ecosystems. Automation and orchestration tools offer this promise, but add on additional layers of intricacy in the requirements of knowing scripting languages, a willingness to work with open source platforms, etc.
Additionally, any time we abstract or simplify a layer, we seem to lose something in the process; this is akin to generation loss. Compounding that loss across many devices or layers of management tends to result in data that is incomplete or, worse, inaccurate, yet this is the data we intend to use to make our decisions.
Is it really too much to ask for simple and accurate? I believe this is where the art of simplicity comes into play. The challenge of creating an environment in which the simple is useful and obtainable requires creativity, attention to detail, and an understanding that no two environments are identical. In creating this environment, it is important to address what exactly will be made simple and by what means. With a clear understanding of the goals in mind, I believe it is possible to achieve these goals, but the decisions on equipment, management systems, vendors, partners, etc. need to be well thought through and the right amount of time and effort must be dedicated to it.