Written by Shaun Neal

IT strategist and thought leader with deep technical skills and a passion for solving complex problems and business challenges through innovation. Adept at maintaining focus and achieving bottom-line results while formulating and implementing advanced technologies. Capable of positioning and delivering business solutions to meet diverse customer needs.

I had an opportunity to attend a Mobility BU hosted training at Cisco HQ in Santa Clara. This training covered Hyperlocation, Connected Mobile Experience (CMX) and the Enterprise Mobility Services Platform (EMSP). I had been looking forward to this ever since I received the invite, having invested time into the solution as early as 2013. These technologies are unified in purpose: each has a role to play in transforming the end-user experience and enabling businesses to engage with their customers in new and interesting ways.
Hyperlocation
As one of the Wireless Field Day 8 delegates, I had an opportunity to see the Hyperlocation Module (HALO) up close and personal; however, we never got a chance to actually play with it. For those interested, I wrote a detailed blog post about the technology after the WFD8 event. This time around, we not only spent time talking through the technology and its use cases, we actually played with it in the CMX Lab at Cisco HQ. Seeing hyperlocation in action is impressive, and the accuracy was within one meter as advertised. While the location accuracy is great, what is really intriguing is that the network is aware of where the user is, rather than relying on the user to interact with a beacon or something similar. I had the opportunity to walk around the floor space with an iPhone 6+ and watch its movement on the screen. The response was impressively crisp for being 100% Wi-Fi based, though not quite as smooth as beacon-based movement tracking. This distinction is important: beacons require a user to be running an app to adequately engage, whereas hyperlocation is simply the network being aware of the device and its movement inherently.
Detect. Connect. Engage.
Cisco’s CMX software works by detecting the presence of a device on the wireless network. Presence is simply the device being local to a given access point; it does not necessitate location. Location is an option, however, and can be accomplished through standard triangulation or by the addition of the HALO module. Connection is the process of getting the user to opt in through a captive portal, SMS, social media, or a mobile app. Some organizations are challenged with mobile app adoption, so alternatives are a welcome addition. Lastly, once the user is connected, engaging with them in new and innovative ways is the goal of the platform.
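To make the "standard triangulation" step concrete, here is a minimal sketch of RSSI-based position estimation in Python. The transmit power, path-loss exponent, and AP coordinates are illustrative assumptions only, not CMX's actual values or algorithm:

```python
# Hypothetical log-distance path-loss model: rssi = TX_POWER - 10*n*log10(d)
TX_POWER = -40.0   # assumed RSSI (dBm) measured at 1 m
PATH_LOSS_N = 2.5  # assumed indoor path-loss exponent

def rssi_to_distance(rssi):
    """Invert the path-loss model to estimate distance in meters."""
    return 10 ** ((TX_POWER - rssi) / (10 * PATH_LOSS_N))

def trilaterate(aps):
    """Least-squares position estimate from (x, y, rssi) AP readings."""
    # Linearize by subtracting the first circle equation from the others.
    x0, y0 = aps[0][0], aps[0][1]
    r0 = rssi_to_distance(aps[0][2])
    a_rows, b_rows = [], []
    for (x, y, rssi) in aps[1:]:
        r = rssi_to_distance(rssi)
        a_rows.append((2 * (x - x0), 2 * (y - y0)))
        b_rows.append(r0**2 - r**2 + x**2 - x0**2 + y**2 - y0**2)
    # Solve the 2x2 normal equations directly (no numpy needed for a sketch).
    s_aa = sum(a * a for a, _ in a_rows)
    s_bb = sum(b * b for _, b in a_rows)
    s_ab = sum(a * b for a, b in a_rows)
    s_ac = sum(a * c for (a, _), c in zip(a_rows, b_rows))
    s_bc = sum(b * c for (_, b), c in zip(a_rows, b_rows))
    det = s_aa * s_bb - s_ab**2
    return ((s_bb * s_ac - s_ab * s_bc) / det,
            (s_aa * s_bc - s_ab * s_ac) / det)

# Three APs at known positions, each reporting an RSSI for the same client.
aps = [(0.0, 0.0, -50.0), (10.0, 0.0, -60.0), (0.0, 10.0, -60.0)]
x, y = trilaterate(aps)
print(round(x, 1), round(y, 1))
```

In practice the system fuses many more samples than this, and the HALO module improves accuracy further by adding angle-of-arrival measurements from its antenna array rather than relying on RSSI alone.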

My Connected Mobile Experience (CMX)
Playing with CMX at the Cisco lab was fantastic—we walked around with various devices ranging from phones to Ava the telepresence robot who drove herself around the lab. Our movements generated a ton of data for CMX which we could then use to send notifications, trigger an action, etc. The reports and analytics offered around these actions are simple to navigate and provide powerful insights for organizations.

Enterprise Mobility Services Platform (EMSP)
EMSP is an open, cloud-hosted mobile application platform that provides an intelligent way to deliver customer engagement and is used with CMX to leverage location-based services. Once a customer’s location is acquired, EMSP’s Wi-Fi-enabled, browser-based captive portal provides a mobile experience specific to where the mobile user is, who they are, and what they’re doing. EMSP then provides event-based, actionable insights that enable improved monetization and conversion of customers from looking to buying, and from general presence to engaged interaction. In addition, the EMSP solution includes a tool suite for rapidly and dynamically updating content for the context-aware mobile experience. With this in mind, EMSP simplifies and accelerates time to deployment. It has the intelligent hooks to act upon the insights provided by CMX location services to improve the client experience, influence behavior, solicit feedback and automate workflow.

My Bluetooth World day one started with a great conversation over breakfast as I presented on the need and opportunity for innovation in healthcare using Bluetooth-enabled solutions. Our group opened up and had some fantastic discussion around the barriers currently challenging this industry, such as the limited number of Bluetooth radios being integrated into medical device solutions for connectivity. We progressed to discussing all of the possible use cases, as well as the opportunity for data from an IoT-enabled world of healthcare to create new use cases as we better understand interactions between machines and humans.
The keynote speeches and individual presentations had great information. I was most interested in the direction of Bluetooth and the features that are coming shortly, especially the improvements to meshing capabilities and range, as these will open the door for great new use cases.
Also of personal interest was Kiyo Kubo’s talk about Bluetooth LE at Levi’s Stadium and the pain of getting to where it is today. Kiyo had gone through all of the challenges around Apple reducing probing rates to almost nil and randomizing MAC addresses in probe frames, forcing a change over to Bluetooth. His team then had to develop a number of tools to make it a success, both for the initial deployment and for long-term manageability.
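For context on why MAC randomization broke probe-based analytics: randomized addresses set the locally administered bit in the first octet, so a probing client can no longer be tied to a stable hardware address. A quick sketch of how such an address can be spotted (the example MACs below are made up):

```python
def is_locally_administered(mac: str) -> bool:
    """Randomized MACs set the locally-administered bit (0x02) in the first octet."""
    first_octet = int(mac.split(":")[0], 16)
    return bool(first_octet & 0x02)

# A randomized address (bit set) versus a vendor-assigned one (bit clear).
print(is_locally_administered("da:a1:19:00:00:01"))  # True  -> likely randomized
print(is_locally_administered("00:1b:63:aa:bb:cc"))  # False -> globally unique OUI
```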
The Expo floor had a wide variety of use cases from BLE managed LED lighting that synced with car audio to IoT-enabled hearing aids that would use location and ambient sound to automatically adjust their sound levels and noise filtration via a cloud interface.

The WLAN Pros Conference is truly a unique experience that I look forward to all year long. Throughout the year we are inundated with vendor marketing material and embroiled in competition. WLPC is a few days where we can come together as individuals, educate each other, build the community and challenge each other to be better at our craft. This year’s conference will be in sunny Phoenix, AZ. Read more about it here. If you’ve never been before and you have an interest in Wi-Fi I urge you to make plans to attend. It is a great opportunity to network and learn from others in the field.
This environment provides a great opportunity to get up and speak about something you are passionate about. The mix of longer presentations and Ten Talks allows for a lot of variety and depth of topics. This year I’ve selected Healthcare wireless as my main presentation topic and will use a Ten Talk slot to provide a sneak peek into the Bluetooth World presentation that I will be giving in March at Levi’s Stadium.

I’ve had the opportunity over the past couple of years to work with a large customer of mine on a refresh of their entire infrastructure. Network management tools were one of the last pieces to be addressed, as the emphasis had been on legacy hardware first and the direction for management tools had not been established. This mini-series will highlight this company’s journey: the problems solved, insights gained, and unresolved issues that still need addressing in the future. Hopefully this helps other companies or individuals going through the process. Topics will include discovery around types of tools, how they are being used, who uses them and for what purpose, their fit within the organization, and lastly what more they leave to be desired.


If you’ve followed the series this far, you’ve seen a progression through a series of tools being rolled out. My hope is that this last post in the series spawns some discussion around the tools, features, and functionality the market still needs. These are the top three things that we are looking at next.
Event Correlation
The organization acquired Splunk to correlate events happening at the machine level throughout the organization, but this is far from fully implemented and will likely be the next big focus. The goal is to integrate everything from clients to manufacturing equipment to networking to find information that will help the business run better, experience fewer outages and issues, and increase security. Machine data is being collected to learn about errors in the manufacturing process as early as possible. This error detection allows for on-the-fly identification of faulty machinery and enables quicker response times, decreasing the amount of bad product and waste and improving overall profitability. I still believe there is much more to be gained here in terms of user experience, proactive notifications, etc.
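As a toy illustration of the kind of correlation involved (this is not Splunk itself, and the sources and event codes below are hypothetical), simply grouping events from the same source inside a time window is enough to surface a multi-event incident:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical event stream from several sources (e.g. forwarded syslog).
events = [
    ("2016-02-01 08:00:03", "press-7",  "TEMP_HIGH"),
    ("2016-02-01 08:00:10", "press-7",  "REJECT_RATE_SPIKE"),
    ("2016-02-01 09:15:00", "switch-3", "LINK_FLAP"),
    ("2016-02-01 08:00:21", "press-7",  "EMERGENCY_STOP"),
]

def correlate(events, window=timedelta(seconds=60)):
    """Cluster events per source when each falls within `window` of the last."""
    by_source = defaultdict(list)
    for ts, source, code in events:
        by_source[source].append(
            (datetime.strptime(ts, "%Y-%m-%d %H:%M:%S"), code))
    incidents = []
    for source, evts in by_source.items():
        evts.sort()
        cluster = [evts[0]]
        for ts, code in evts[1:]:
            if ts - cluster[-1][0] <= window:
                cluster.append((ts, code))
            else:
                if len(cluster) > 1:  # lone events are not incidents
                    incidents.append((source, [c for _, c in cluster]))
                cluster = [(ts, code)]
        if len(cluster) > 1:
            incidents.append((source, [c for _, c in cluster]))
    return incidents

print(correlate(events))
```

Here the three press-7 events chain together into one incident, while the isolated switch event is ignored; a real deployment would obviously layer much richer search, alerting, and cross-source logic on top of this idea.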
Software Defined X
The organization is looking to continue its move into the software-defined world for networking, compute, storage, etc. These offerings vary greatly, and the decision to go down a specific path shouldn’t be taken lightly. In our case, we are looking to simplify network management across a very large organization and do so in a way that enables not only IT workflows, but those of other business units as well. This will likely be OpenFlow-based and start with the R&D use cases. Organizationally, IT has now set standards in place that all future equipment must support OpenFlow as part of the SDN-readiness initiative.
Software-defined storage is another area of interest, as it reduces the dependency on any one particular hardware type and allows for ease of provisioning anywhere. The ideal use case again is for the R&D teams as they develop new products. The products likely to lead here are those that are pure software and open; evaluation has not really begun in this area yet.

DevOps on Demand
IT getting a handle on the infrastructure needed to support R&D teams was only the beginning of the desired end state. One of the loftiest goals is to create an on-demand lab environment that provides compute, storage, and networking in a secure fashion, as well as intelligent request monitoring and departmental bill-back. We’ve been looking into Puppet Labs, Chef, and others but do not have a firm answer here yet. This is a relatively new space for me personally, and I would be very interested in further discussion around how people have been successful in it.
Lastly, I’d just like to thank the Thwack Community for participation throughout this blog series. Your input is what makes this valuable to me and increases learning opportunities for anyone reading.


Blog Series
After months of rolling out new tools and provisioning the right levels of access, we started to see positive changes within the organization.
Growing Pains
Some growing pains were to be expected, and this was certainly no exception. Breaking bad habits developed over time is a challenge; however, the team worked to hold each other accountable and began to build the tools into their daily routines. New procedures for rolling out equipment included integration with monitoring tools and testing to ensure data was being logged and reported properly. The team made a concerted effort to ensure that previously deployed devices were populated into the system and spent some time clearing out retired devices. Deployments weren’t perfect at first and a few steps were skipped, so the team developed deployment and decommission checklists to help ensure the proper steps were being met. Some of the deployment checklist items were things that would be expected: IP addressing, SNMP strings, AAA configuration, change control submission, etc., while others were somewhat less obvious – placing inventory tags on devices, recording serial numbers, etc. We also noticed that communications between team members started to change, as discussions were starting from a place in which individuals were better informed.
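A checklist like this is also easy to enforce in software. The sketch below validates a hypothetical device record against the kinds of fields mentioned above; the field names and values are illustrative, not our actual schema:

```python
# Hypothetical deployment checklist: every field must be filled in
# before a device can be marked as deployed.
REQUIRED = ["ip_address", "snmp_community", "aaa_profile",
            "change_ticket", "asset_tag", "serial_number"]

def missing_items(device: dict) -> list:
    """Return checklist fields that are absent or empty for a device record."""
    return [f for f in REQUIRED if not device.get(f)]

device = {
    "hostname": "core-sw-01",
    "ip_address": "10.10.1.1",
    "snmp_community": "monitoring-ro",
    "aaa_profile": "tacacs-default",
    "change_ticket": "CHG0012345",
    "asset_tag": "",  # inventory tag not yet applied; serial never recorded
}

print(missing_items(device))  # flags the skipped steps before sign-off
```

Wiring a check like this into the provisioning workflow is one way to keep skipped steps from reaching production unnoticed.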
Reducing the Shadow
After the “growing pains” period, we were pleased to see that the tools were becoming part of everyday activities for the core teams. The increased knowledge led to some interesting discussions around optimizing locations for specific purposes and helped shed some light on regular pain points within the organization. For this particular customer, the R&D teams have “labs” all over the place, which can place undue stress on the network infrastructure. The “Shadow IT” that had been an issue before could now be better understood. In turn, IT made an offer to manage the infrastructure in exchange for giving those teams what they wanted. This became a win-win for both groups and has fundamentally changed the business for the better. In my opinion, this is the single best change the company experienced. Reducing the role of “Shadow IT” and migrating those services to the official IT infrastructure group created far better awareness and supportability. As an added benefit, budgets are being realigned, with additional funding shifted to IT, which has taken on this increased role. There is definitely still some learning that needs to be done here, but the progress thus far has been great.
Training for Adoption
Adoption seemed slow for the help desk and some of the ancillary teams who weren’t used to these tools, and we wanted to better understand why. After working with the staff to understand the limited use, it became apparent that although some operational training had been done, training for adoption had not. A well-designed training-for-adoption strategy can make the difference between success and failure of a new workflow or technology change. The process isn’t just about providing users with technical knowledge, but rather about building buy-in, ensuring efficiency, and creating business alignment. It is important to evaluate how the technology initiative will help improve your organization. Part of the strategy should include an evaluation plan to measure results against those organizational outcomes, such as efficiency, collaboration, and customer satisfaction (whether for internal business units or outward-facing customers).

The following are tips that my company lives by to help ensure that users embrace new technology to advance the organization:
Communicate the big-picture goals in relevant terms. To senior management or technology leaders, the need for new technology may be self-evident. To end users, the change can seem arbitrary. All stakeholders share common interests, such as improving efficiency or patient care, yet users may resist a new workflow system unless the project team can illustrate how the system will help them better serve patients and save time.

Invest properly in planning and resources for user adoption. If an organization is making a significant investment in new systems, investing in the end-user experience is imperative to fully realize the value of the technology. However, training for user adoption often is an afterthought in major technology project planning. Furthermore, it is easy to underestimate the hours required for communications, workshops and working sessions.

Anticipate cultural barriers to adoption. Training should be customized to your corporate culture. In some organizations, for instance, time-strapped users may assume that they can learn new technology “on the fly.” Others rely on online training as a foundation for in-person instruction. Administrators may face competing mandates from management, while users may have concerns about coverage while they are attending training. A strong project sponsor and operational champions can help anticipate and overcome these barriers, and advise on the training formats that will be most effective.

Provide training timed to technology implementation. Another common mistake is to provide generic training long before users actually experience the new system, or in the midst of go-live, when it becomes chaotic. Both scenarios pose challenges. Train too early and, by the time you go live, users forget how they are supposed to use the technology and may be inclined to use it as little as possible. If you wait for go-live, staff may be overwhelmed by their fears and anxieties, and may have already developed resistance to change. The ideal approach will depend on each facility’s context and dependencies. However, staggering training, delivering complex training based on scenarios, addressing fears in advance, and allowing for practice time are all key success factors.

Provide customized training based on real-life scenarios. Bridging the gap between the technology and the user experience is a critical dimension of training, and one that some technology vendors tend to overlook in favor of training around features and functionality. Train with real-life scenarios, incorporating the various technologies integrated into a “day in the life” of an end user or staff member. By focusing on real-world practice, this comprehensive training helps overcome the “fear of the new” as users realize the benefits of the new technology.

Create thoughtful metrics around adoption. Another hiccup in effective adoption occurs when companies do not have realistic metrics, evaluation, and remediation plans. Without these tools, how do you ensure training goals are met—and, perhaps more importantly, correct processes when they are not? We recommend an ongoing evaluation plan that covers go-live as well as one to six months out.

Don’t ignore post-implementation planning. Contrary to popular perception, training and adoption do not end when the new system goes live. In fact, training professionals find that post-implementation support is an important area for ensuring ongoing user adoption.