Posts

5 Things You Don’t Know Are Killing Your WiFi

WiFi Performance

Bad WiFi service frustrates employees, hurts productivity, and can send customers to your competitors.  Even if you use your wireless access point (AP) vendor’s management tool, here are five (5) things that may be hurting your WiFi service quality without your knowledge:

  1. Network traffic actually transmitted over the air:
    APs know when they have attempted to transmit data to a client, but they cannot tell whether a malfunction kept that transmission from being received. APs cannot detect their own transmission problems, such as dropped packets, chatter, and jitter.
  2. Clients consuming channel bandwidth that are not connected to your infrastructure:
    Not every device using channel bandwidth connects to your network. These devices often interfere with connected traffic, hurting performance for others.
  3. Misconfiguration within your infrastructure:
    APs cannot self-detect if they are configured improperly or if neighboring APs are creating interference. APs are not clients on the network, so they can only see what they transmit and what they receive.
  4. Clients connected to APs not managed by your AP controller:
    While your AP management tool may identify unmanaged or unauthorized APs on your network, it cannot detect or analyze the clients connected to those APs or the impact these unmanaged devices have on your WiFi performance.
  5. Interference from devices and networks outside of your control:
    Vendor AP management tools are built to manage the vendor’s APs. These tools do not identify or analyze neighboring networks that interfere with yours. Bandwidth and channel conflicts go undetected and unresolved.

Your vendor AP manager misses these issues because your APs are not WiFi clients.

The best way to monitor and manage WiFi performance and reliability is to place a passive sensor client in your environment. Unlike the expensive WiFi assessments of the past, done by on-site technicians lugging around specially equipped computers and meters, innovative services like the Wyebot Wireless Intelligence Platform™ (WIP) give you a plug-and-go solution for about 1/10th the cost. WIP is a vendor-agnostic tool that can see and monitor your entire WiFi environment, analyze and prioritize issues with alerts, make knowledge-driven solution recommendations, and provide remote network testing tools.

Tools like Wyebot help you ensure your WiFi network best serves your business.


Please download our eBook, Understanding WiFi Quality, for more information, or contact us to arrange an initial WiFi Assessment.


 

WiFi Quality is About the User Experience

An ever-increasing number of businesses are learning that WiFi is more than a convenient network connection.

  • Restaurants, bars, and coffee shops that want patrons to linger and spend more lose business when customers can’t check the score, answer an email, or scan their social apps.
  • When your mobile app doesn’t work in your establishment because of poor WiFi service quality, your patrons go elsewhere.
  • WiFi quality influences which conference rooms get booked, where teams choose to huddle, where individuals choose to sit and work, and where people choose to socialize.

WiFi service quality is becoming a competitive factor that can help or hurt your business.

Most network managers rely on vendor management tools to monitor and control their wireless Access Points (APs). These tools provide basic statistics on traffic volume and patterns. The more sophisticated solutions provide color-coded heat maps that overlay WiFi signal strength onto blueprints of your business. Some tools even use APs to triangulate users’ locations within your business.

What vendor AP management tools do not show you, however, is the client experience. You can have great WiFi signal coverage, yet applications still time out if client devices experience too much interference. Your network may be set up to support a high density of users, but if clients end up AP-hopping in search of signal strength, the management overhead can cripple performance.

To understand WiFi quality: Understand the user experience.

By definition, your Access Points are not and cannot be clients on your WiFi network. The data your APs gather represents only what goes in and out of (or is simulated by) each Access Point. WiFi clients will see your network performance and reliability differently than your APs.

Think of it this way. A chef creates a new signature dish. The chef knows that she’s used the best, freshest ingredients. The chef has sampled dozens of variations to get the taste just right. The chef believes this is her best new dish ever. Even so, a few, many, most, or all customers may not like the taste, texture, or presentation of the meal. Fortunately, WiFi quality and reliability are not subject to personal taste and preferences; WiFi service quality is determined by the client experience.

The only way to understand, monitor, and manage WiFi service quality is to monitor your network from a client.
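To make the idea concrete, here is a minimal sketch of client-side monitoring: a script run on an ordinary device on the WiFi network that times repeated TCP connections to the local gateway and reports latency, jitter, and failed probes. It is an illustration only, not the Wyebot platform; the gateway address, port, and sample count are assumptions you would adjust for your own network.

```python
import socket
import statistics
import time

GATEWAY = "192.168.1.1"  # assumption: your AP or router address
PORT = 80                # assumption: a TCP port the gateway answers on
SAMPLES = 20

def probe_once(host, port, timeout=1.0):
    """Return the TCP connect round-trip time in milliseconds, or None on failure."""
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.perf_counter() - start) * 1000.0
    except OSError:
        return None

results = [probe_once(GATEWAY, PORT) for _ in range(SAMPLES)]
ok = [r for r in results if r is not None]
loss_pct = 100.0 * (SAMPLES - len(ok)) / SAMPLES

if ok:
    print(f"median latency: {statistics.median(ok):.1f} ms")
    print(f"jitter (std dev): {statistics.pstdev(ok):.1f} ms")
print(f"failed probes: {loss_pct:.0f}%")
```

Even a crude probe like this, run from the places your users actually sit, reveals problems that AP-side statistics never show.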

Historically, this has meant expensive service engagements in which technicians bring in monitoring and analysis systems for a “point in time” assessment. These assessments, which can cost thousands of dollars and only capture one point in time, are beyond the budget of most small and midsize businesses and schools.

New solutions, however, provide vendor-agnostic analyses of your WiFi network using passive sensor WiFi clients, prioritize identification of service issues, and offer knowledge-driven recommended solutions.  With the Wyebot Wireless Intelligence Platform™ (WIP), for example, in most instances we can provide periodic WiFi Assessments for less than 1/10th the cost of a traditional assessment. Ongoing monitoring becomes affordable for nearly all businesses and schools, with the added value of historical data analysis, real-time alerts, and remote network testing.

If your business relies on WiFi, you can now afford to make sure your WiFi network is reliable and performs well.


For more information, download our eBook, Understanding WiFi Quality, or contact us about arranging an initial WiFi Assessment.


 

The Last Mile: Internet Access in the Age of Cloud


Internet access has changed radically in the past half decade. With greater availability of broadband service from cable providers, small and midsize businesses are no longer limited to the legacy wide area network technologies offered by traditional telephone providers. The cost of service has also plummeted. In our area, we have gone from paying $500 per month for a 1.5 Mbps circuit to paying $149 per month for 75 Mbps service: from roughly $330 per Mbps down to about $2 per Mbps in less than five years. The impact is profound and has spurred changes in how we use the Internet. We have moved from surfing web sites and sending email to cloud computing, which creates a new set of challenges for small and midsize businesses. High-speed Internet is not readily available in many rural, sub-rural, and urban areas. It is often built over aging infrastructure and lacks reliability. And, most importantly …

Many Broadband Services Fail to Meet the Needs of Small Business

Most business broadband services are asymmetrical, with different upload and download speeds. With uploads running at 10%-15% of download speeds, broadband fails to meet the needs of cloud users. When working with cloud systems, applications, and file services, as much data moves “up” to the cloud as “down” to the user. Symmetric upload/download speeds are critical to reliable performance and productivity.
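To see how much the upload side matters, consider a simple, hypothetical example: pushing a 1 GB file to a cloud file service over an asymmetric 75/10 Mbps connection versus a symmetric 75/75 Mbps connection. The file size and speeds below are assumptions for illustration, and the math ignores protocol overhead.

```python
def upload_minutes(size_gb, upload_mbps):
    """Ideal upload time in minutes, ignoring protocol overhead."""
    size_megabits = size_gb * 8 * 1000  # GB -> megabits (decimal units)
    return size_megabits / upload_mbps / 60

SIZE_GB = 1.0  # assumption: a 1 GB file sync or backup

for label, up_mbps in [("asymmetric 75/10 Mbps", 10), ("symmetric 75/75 Mbps", 75)]:
    print(f"{label}: {upload_minutes(SIZE_GB, up_mbps):.1f} minutes to upload {SIZE_GB:.0f} GB")
```

On the symmetric link the upload finishes in under two minutes; on the asymmetric link the same file takes more than thirteen.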

Fortunately, Solutions Exist.

By looking to other carriers and their agency networks, we can offer solutions that deliver bandwidth, reliability, access, and coverage.

For bandwidth, many carriers offer Fast Ethernet, Gig Ethernet, and other high speed fiber and coax services. These services deliver symmetrical service with a range of speeds, usually starting at 100 Mbps. Availability is generally good in urban and suburban areas. For buildings not pre-wired for service, installation may involve pulling new wires from the street network. In most cases, carriers will waive this construction cost, along with normal installation fees, when you sign a three (3) year agreement.

For reliability, a second, fail-over, Internet connection can provide business continuity when your primary service fails. As the failures are often on the last mile — the connection from the network to your business — alternate service should not be built over the same infrastructure as your primary connection. For many small businesses, cellular can provide reliable, affordable fail-over services with reasonable speeds. Solutions like the Datto Network Appliance connect to your local provider and offer automatic fail-over to the Verizon or AT&T cellular data networks for a low monthly fee.

For access and coverage in areas without high speed Internet service, broadband satellite is emerging as a viable solution, particularly in rural and sub-rural areas.  Speeds start at 20 Mbps. Service may not be symmetrical everywhere, but coverage areas continue to grow.

The solution you need for your business will depend on your location, size, and use of cloud services. Taking the time to pick the right Internet access will improve performance and productivity.


If you are interested in exploring options, contact us for a free consultation.


 

 

When and Why Go VDI?

This blog post is a reprint of an article first published on Experts Exchange as part of a series on cloud strategies and issues for small and mid-size businesses.


Like many organizations, your foray into cloud computing may have started with an ancillary or security service, like email spam and virus protection. For some, the first or second step into the cloud was moving email off-premise.  For others, a cloud-based CRM service was the first application in the cloud.

Currently, we see organizations rapidly moving file services and storage into cloud-based solutions as more marketing, sales, and line of business applications switch to Software-as-a-Service (“SaaS”) solutions. Often, this leaves you with a small set of business applications running on-premise.

What do you do with applications and services left on-premise when most of your systems have moved, or will be moving, to the cloud?

While you may wish to keep these legacy systems on-premise, you can move them into a cloud or hosted Virtual Desktop Infrastructure (VDI) environment. VDI environments provide a virtualized, remote desktop accessible via browser or “receiver” app.  When connected, users get their full desktop environment with access to local and network applications.

Some applications, such as computer-aided design (CAD) and manufacturing/process controls, are not well suited for VDI, but most local and network applications work well within a VDI environment. VDI services typically charge based on processor load, memory, and allocated disk space. Fees may also include standard office software, data backup services, malware protection, and other common network services.

Why use a VDI solution?

  • Improved, secure access to legacy applications, particularly for remote and mobile users
  • Lower cost for IT infrastructure, especially when email, apps, and other services are moving to the cloud
  • Improved reliability and security, as VDI solutions run in professionally managed data centers
  • IT resources freed to work on higher-value projects instead of maintaining core infrastructure and services
  • Lower cost and less administration for end-user devices, as you can move to thin clients, Chromebooks, and tablets as existing desktops and laptops need replacing

When to use a VDI solution?

For some small and mid-size enterprises, VDI solutions provide a means to “clean out the closet”, to simplify their IT solutions and walk away from endless maintenance and updates. For others, a VDI solution enables them to move legacy systems and applications to a cloud-based environment.

When considering a VDI service for legacy applications and systems, answering a few basic questions will help you determine if your “when” is “now”.

  • Is the application available as a Software-as-a-Service (SaaS) subscription?
  • Does the application have custom modules or code that would prevent running the SaaS version?
  • Are application requirements — processor power, memory, disk space — known and understood?
  • How many users need access to the application?
  • How many users receive reports or data from the application?
  • What connections or integrations exist between local/network applications?

With answers to these questions, you can scope the size and configuration of your VDI environment. You can also weigh the benefits and costs of a VDI solution against the cost and effort required to maintain the systems on-premise.
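As a back-of-the-envelope illustration of that scoping exercise, the sketch below turns a few answers into a rough capacity and cost estimate. Every figure in it, from per-user resources to per-unit rates, is a hypothetical placeholder; substitute your application's actual requirements and your provider's actual pricing.

```python
# Hypothetical VDI sizing estimate -- every figure below is a placeholder assumption
users = 15
vcpu_per_user = 0.5       # concurrent users rarely need a full vCPU each
ram_gb_per_user = 4
disk_gb_per_user = 50
shared_app_disk_gb = 200  # the legacy application and its data

rate_per_vcpu = 20.00     # $/month, placeholder rate
rate_per_gb_ram = 5.00    # $/month, placeholder rate
rate_per_gb_disk = 0.10   # $/month, placeholder rate

vcpus = users * vcpu_per_user
ram_gb = users * ram_gb_per_user
disk_gb = users * disk_gb_per_user + shared_app_disk_gb

monthly_cost = (vcpus * rate_per_vcpu
                + ram_gb * rate_per_gb_ram
                + disk_gb * rate_per_gb_disk)

print(f"Capacity: {vcpus:g} vCPU, {ram_gb} GB RAM, {disk_gb} GB disk")
print(f"Estimated hosting cost: ${monthly_cost:,.2f} per month")
```

With an estimate like this in hand, you can compare the hosted figure against the hardware refresh, licensing, and administration costs of keeping the systems on-premise.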

As you move applications and services to the cloud, you will likely reach a point where you no longer have the critical mass necessary for on-premise servers to be the most economical and effective solution. When you reach this tipping point, a VDI solution will provide a secure home for your systems, your business, and your budget.

Wide Area Benefits from Going Cloud

Most of the businesses, nonprofits, and local governments we help move to the cloud see both tangible and intangible benefits shortly after deployment. Whether they focus on improved availability and reliability, easier secure access to files, lower capital expenditures, or the benefits of improved collaboration and access to video conferencing services, very few businesses regret the move.

Many organizations, however, do not look beyond the scope of their cloud implementation for other, indirect or subsequent benefits. Cloud migrations often create opportunities for additional IT simplification that can improve the users’ experience as well as further lower costs.

Most common across our customer base is the ability to simplify wide area networks. Organizations with multiple locations rely on wide area networks to connect offices, servers, and people. We see several common architectures, each with limitations.

  • Centralized servers require all users not at the server location to access data remotely, at lower speeds.
  • Distributed servers provide performance, but require more complicated backup solutions and/or data synchronization.
  • Hub-and-spoke networks connect all sites, typically over leased/dedicated lines. Bandwidth between sites is limited and relatively expensive, with a single path (or, hopefully, redundant paths) to the Internet.
  • MPLS (Multiprotocol Label Switching) networks provide a managed network, better security, and greater Internet bandwidth, but still rely on leased/dedicated bandwidth to the carrier.
  • LAN-to-LAN and PC-to-LAN VPNs can securely connect machines and sites over private or public lines, but VPN services add overhead that hurts performance, increases administrative costs, and makes it more difficult for users to connect.

When files and other data are centrally located in the cloud, you can simplify your wide area networks and lower costs.

Because your data is centrally located, you may no longer need point-to-point connections between your offices. Replacing point-to-point, VPN, and MPLS links with Direct Internet Access links can deliver up to a 100X benefit, as many carriers can provide up to 10x the speed at as little as 1/10th the cost. At these price points, building in redundancy is affordable and can protect your business from carrier outages.
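The “up to 100X” figure is simply the speed gain multiplied by the cost reduction. The sketch below shows the arithmetic with illustrative numbers; the specific link speeds and prices are assumptions, not quotes.

```python
# Illustrative price-performance comparison (all figures are assumptions)
mpls_mbps, mpls_cost = 10, 900  # e.g., a legacy 10 Mbps MPLS link at $900/month
dia_mbps, dia_cost = 100, 90    # e.g., a 100 Mbps direct Internet access circuit at $90/month

speed_gain = dia_mbps / mpls_mbps                # 10x the speed
cost_reduction = mpls_cost / dia_cost            # 1/10th the cost
price_performance = speed_gain * cost_reduction  # ~100x the bandwidth per dollar

print(f"{speed_gain:.0f}x the speed at {dia_cost / mpls_cost:.0%} of the cost "
      f"= {price_performance:.0f}x price-performance")
```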

As you move to the cloud, reassess the role of your wide area and Internet links. Simplification and modification of your architecture can save you time, money, and overhead, while providing faster, more reliable service.


If you would like to review your network for opportunities, or discuss the potential benefits from moving to the cloud, please contact us for a no-obligation discussion.


 

Moving to the Cloud: Provider Reliability

 

This post is the third in a series addressing concerns that may prevent organizations from moving to cloud-based solutions.

One of the challenges in planning a move to the cloud remains the relative youth of the industry. While the concept of cloud computing is not new (tip your hat to Control Data and its mainframe time-sharing service in the 1980s), most cloud computing services are relatively new. Even services from long-standing, reliable vendors like IBM and Dell are relatively new ventures for these firms and have yet to be proven over the long term.

Organizations looking at any cloud service, be it SaaS, PaaS, or IaaS, must consider the reliability of the provider. In doing so, customers must also understand the benchmarks vendors use when reporting their statistics. Considerations include:

  • What is the availability of the service?  How well does the service provider meet their Service Level Agreement (SLA) benchmarks in terms of total downtime and/or service disruptions?
  • What is the reliability of the service?  How often does the service experience issues?  While most providers tout availability, six disruptions lasting 10 minutes each may have more impact on your operations than a single hour-long disruption (see the sketch after this list).
  • Does the provider have performance benchmarks?  If so, how well does the provider meet the benchmarks?  In moving to the service provider, what expectations/needs will you have with respect to WiFi capacity, fixed network performance, and Internet capacity?   In many cases, the limiting factor on end-user performance is not the service provider or the Internet speed — it is the organization’s internal wired and wireless capacity.
  • What level of support do you expect?  Understanding how the provider delivers support — directly or through resellers/partners — is key to an organization’s long-term satisfaction with the service.
  • Does the vendor have the financial stability for the long-term?  With the number of start-ups in the cloud space, this factor may be the most difficult to ascertain.  Looking at the company’s financials, funding levels, and profitability can provide some insight.  Assessing whether the provider would be a good buy-out or merger target can also instill confidence that your provider will not go away unexpectedly.
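The sketch below illustrates the availability-versus-reliability point from the list above: two hypothetical providers with identical monthly downtime, and therefore identical availability numbers, can have very different reliability profiles.

```python
# Availability vs. reliability: identical downtime, different impact (hypothetical figures)
MINUTES_PER_MONTH = 30 * 24 * 60

providers = {
    "Provider A": [10] * 6,  # six 10-minute disruptions
    "Provider B": [60],      # one hour-long disruption
}

for name, outages in providers.items():
    downtime = sum(outages)
    availability = 100 * (1 - downtime / MINUTES_PER_MONTH)
    print(f"{name}: {len(outages)} disruption(s), {downtime} minutes down, "
          f"{availability:.3f}% availability")
```

Both providers report 99.861% availability for the month, yet six separate interruptions may disrupt your operations far more than one.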

With a modicum of due diligence, organizations can assess the reliability of cloud solution providers before making a commitment.  Reputable vendors will openly share their data and will not hesitate to discuss failures and how similar events will be prevented going forward.  And while this type of discussion feels new, it is the same process CIOs and IT decision makers have been using for decades as they evaluate new technologies and vendors.  The players are new, but the process remains the same.

Next Post in the Series:  Privacy

Previous Post in the Series:  Moving to the Cloud: Cost Savings

 

Friday Thought: All Outages are Not Equal

Last week Google Docs experienced an outage lasting about 30 minutes.  Almost immediately, the “reconsider the cloud” articles and blog posts began to appear.  Articles like this one on Ars Technica lump the Google Docs outage together with other cloud outages, including Amazon’s outage earlier this year and the ongoing problems with Microsoft’s BPOS and Office365 services.

And while no outages are good, they are not all the same.  In most cases, the nature of an outage and its impact reflect the nature of the architecture and the service provider.

  • The Google Docs outage was caused by a memory error and was exposed by an update.  Google acknowledged the error and resolved the issue in under 45 minutes.
  • Amazon’s outage was a network failure that took an entire data center offline.  Customers that signed up for redundancy were not impacted.
  • Microsoft’s flurry of outages, including a 6-hour outage that took Microsoft almost 90 minutes to fully acknowledge, appears to be related to DNS, load, and other operational issues.

Why is it important to understand the cause and nature of the outage?  With this understanding, you can provide rational comparisons between cloud and in-house systems and between vendors.

Every piece of software has bugs and some bugs are more serious than others.  Google’s architecture enables Google to roll forward and roll back changes rapidly across their entire infrastructure.  The fact that a problem was identified and corrected in under an hour is evidence of the effectiveness of their operations and architecture.

To compare Google with in-house systems: Microsoft releases bug fixes and updates monthly, and these generally require server reboots.  Depending on the size and use of each server (file/print, Exchange, etc.), multiple reboots may be necessary, and reboots can run well over an hour.  In the last two years, over 50% of all “Patch Tuesday” releases have been followed by updates, emergency patches, or hot-fixes with a recommendation for immediate action.  Fixing a bug in one of Microsoft’s releases can take from hours to days.  Comparatively, under an hour is not so shabby.

When looking across cloud vendors, the nature of the outage is also important.  Amazon customers that chose not to pay extra for redundancy knowingly assumed a small risk that their systems could become unavailable due to a large error or event.  Just like any IT decision, each business must make a cost/benefit analysis.

Customers should understand the level of redundancy provided with their service and the extra costs involved to ensure better availability.

The most troubling of the cloud outages are Microsoft’s.  Why?  Because the causes appear to relate to an inability to manage a high-volume, multi-tenant infrastructure.  Just like you cannot watch TV without electricity, you cannot run online services (or much of anything on a computer) without DNS.  That Microsoft continues to struggle with DNS, routing, and other operational issues leads me to believe that their infrastructure lacks the architecture and operating procedures to prove reliable.

Should cloud outages make us wary? Yes and no.  Yes to the extent that customers should understand what they are buying with a cloud solution — not just features and functions, but ecosystem.  No, to the extent that when put in perspective, cloud solutions are still generally proving more reliable and available than in-house systems.

 

 

Tuesday Take-Away: The True Role of the SLA

As you look towards cloud solutions for more cost effective applications, infrastructure, or services, you are going to hear (and learn) a lot about Service Level Agreements, or SLAs.  Much of what you will hear is a big debate about the value of SLAs and what SLAs offer you, the customer.

Unfortunately, some vendors frame the value of their SLAs based on the compensation customers receive when the vendor fails to meet its service level commitments.  The best example of this attitude is Microsoft’s comparison of its cash payouts to Google’s SLA, which provides free days of service.  Microsoft touts its cash refunds as a better response to failure.  Why any company would send out a marketing message that begins with “When we fail …” is beyond me.  But that is a subject for another post someday.

That said, Microsoft and the customers who are comforted by this compensation are missing the point of the SLA in the first place.  Any compensation for excessive downtime is irrelevant with respect to the actual cost and impact on your business.  And unless a vendor is failing miserably and often, the compensation itself is not going to change the vendor’s track record.

The true role of the SLA is to communicate the vendor’s commitment to providing you with service that meets defined expectations for Performance, Availability, and Reliability (PAR).  The SLA should also communicate how the vendor defines and sets priorities for problems and how it will respond based on those priorities.  A good SLA sets expectations and defines the method of measuring whether those expectations are met.

Continuing with the Microsoft and Google example: Microsoft sets an expectation that you will have downtime.  While the downtime is normally scheduled in advance, it may not be.  Google, in contrast, sets an expectation that you should have no downtime, ever.  The details follow.

Microsoft’s SLA is typical in that it excludes maintenance windows, periods of time the system will be unavailable for scheduled or emergency maintenance.  While Microsoft does not schedule these windows in a regular weekly or monthly time frame, it does promise to give you reasonable notice of maintenance windows.  The SLA, however, allows Microsoft to declare emergency maintenance windows with little or no notice.

In August 2010, Microsoft’s BPOS service had six emergency maintenance windows, totaling more than 10 hours, in response to customers losing connectivity to the service, along with 30 hours of scheduled maintenance windows.  Customers experienced more than 40 hours of downtime that month, all within the boundaries of Microsoft’s SLA and the expectations it sets.  On August 17, 2011, Microsoft experienced a data center failure that resulted in the loss of Exchange access for its Office365 customers in North America for as long as five hours.  The system was down for 90 minutes before Microsoft acknowledged this as an outage.

Google’s SLA sets an expectation of system availability 24x7x365, with no scheduled downtime for maintenance and no emergency maintenance windows.

The difference in SLAs sets a very different expectation and makes a statement about how each vendor builds, manages, and provides the services you pay for.

When comparing SLAs, understand the role of maintenance windows and other “exceptions” that give the vendor an out.  Also, look at the following; a sketch of a side-by-side comparison follows the list.

  • Definitions for critical, important, normal, and low priority issues
  • Initial response times for issues based on priority level
  • Target time to repair for issues based on priority level
  • Methods of communicating system status and health
  • Methods of informing customers of issues and actions/results
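One practical way to use that checklist is to lay each vendor's commitments out side by side. The sketch below builds an entirely hypothetical priority matrix; the priority names, response times, and repair targets are placeholders to be filled in from each vendor's actual SLA.

```python
# Hypothetical SLA comparison matrix -- every value is a placeholder, not a real vendor's terms
sla = {
    # priority    (first response,   target time to repair)
    "critical":  ("15 minutes",      "4 hours"),
    "important": ("1 hour",          "1 business day"),
    "normal":    ("4 hours",         "3 business days"),
    "low":       ("1 business day",  "next scheduled release"),
}

print(f"{'Priority':<12}{'First response':<20}{'Time to repair'}")
for priority, (response, repair) in sla.items():
    print(f"{priority:<12}{response:<20}{repair}")
```

Filling in one of these per vendor, straight from the SLA text, makes gaps and vague commitments obvious before you sign.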

Remember, if you need to use the compensation clause, your vendor has already failed.

 

 

 

Webcasts

Next Normal: Apps & Servers

3T@3 Webcast Series: Tuesday, Mar 16th at 3:00 PM

COVID-19 and the events of the past year have changed, and continue to change, the way we run our businesses.  While some of these changes are temporary, many will become part of our next normal. For many of us, these changes came in a scramble to work from home.

What IT changes best position your business for the future?

This month’s 3T@3 Webcast is the second in our “Next Normal” series looking at how we adapt, prepare, and respond to economic, social, and business changes.

With “Apps & Servers”, we explore how your team accesses the applications, systems, and data they need to succeed, whether in the office or working remotely. We will compare the pros and cons of on-site systems, hosted servers, and cloud solutions with respect to performance, availability, reliability, and security. In doing so, we will discuss options and roadmaps for modernizing your apps and systems infrastructure.

Watch the recording on-demand



Data Protection & Security