odcsales@opendatacenters.net | Sales: (212) 796-5502 | 24/7 Client Service: (212) 901-5500



Top 10 Key Takeaways from CAPRE’s Fourth Annual Greater Seattle & Pacific Northwest Data Center Summit

CapRE has compiled the top 10 highlights, a mix of interesting observations and predictions about the Seattle market: https://www.capremedia.com/39118-2


WebSite Source Expands into Open Data Centers’ Piscataway Facility

As part of its continued expansion of the WebSite Source (“WSS”) business unit, 1stPoint Communications has begun migrating client hosting systems into Open Data Centers’ Piscataway, NJ data center. WebSite Source provides virtual private servers, cloud systems, domain name registration and ecommerce services to its hosting customers. It currently operates systems in a data center facility located in Dublin, OH. “By operating systems in more than one data center facility WSS will be able to offer fully redundant hosting solutions to its clients,” said Kristen Vasicek, Director of Marketing for 1stPoint Communications. Ms. Vasicek is jointly responsible for product development for the WSS business unit.

Open Data Centers’ Piscataway facility has a 2N electrical and mechanical design: an entire half of the facility’s infrastructure can fail and the critical systems within it remain completely unaffected. This level of redundancy is ideal for the operations of the WSS business unit. As part of the migration, WSS is deploying a new fleet of servers and network equipment, expanding its capacity by over one thousand percent. “Our strategic vision for the business unit necessitates the expansion of our network and our systems,” commented Erik Levitt, 1stPoint’s CEO. “We are committing capital and resources to enhance the underlying infrastructure and build a best-of-breed environment that will complement the new services we intend to offer to our clients.”
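The value of a 2N design is easy to see with a back-of-the-envelope availability calculation. The sketch below is illustrative only: the 99.9% single-path availability figure is an assumption for the sake of the arithmetic, not a measured value for the Piscataway facility or any other.

```python
# Back-of-the-envelope comparison: single-path (N) vs fully duplicated (2N)
# power/cooling. Availability figures are illustrative assumptions.

def downtime_hours_per_year(availability: float) -> float:
    """Convert an availability fraction into expected annual downtime."""
    return (1.0 - availability) * 365 * 24

single_path = 0.999                       # assume one path is 99.9% available
dual_path = 1 - (1 - single_path) ** 2    # 2N: an outage needs BOTH paths down

print(f"N  expected downtime: {downtime_hours_per_year(single_path):.2f} h/yr")
print(f"2N expected downtime: {downtime_hours_per_year(dual_path):.4f} h/yr")
```

Under these assumed numbers, duplicating the path cuts expected downtime from hours per year to minutes, which is the intuition behind 2N: both independent sides must fail simultaneously before critical load is affected.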

1stPoint acquired WebSite Source in June 2016 and has since expanded its virtual private server and cloud offerings, added advanced managed storage solutions, and integrated with a number of software providers for backup and recovery. “We are very excited about the deployment of new services that will integrate our advanced messaging products,” added Vasicek. “We anticipate the release of those services in the first quarter of 2017. They will continue to advance 1stPoint’s position as a leader in developing the new paradigm for infrastructure deployment throughout the next decade.”

About 1stPoint Communications

1stPoint Communications provides integrated messaging, voice, data and mobile service for small businesses, enterprises and carriers. 1stPoint is committed to delivering all of the services businesses need to interact with their customers, employees and suppliers, providing its clients a New Way to Work.


Investors: Asset Sales Reflect Telco Troubles, Not Colo Trends

HALF MOON BAY, Calif. – In 2011, telecom companies were major buyers of data center companies. Just four years later, they’re trying to sell many of the same data center assets. Is this a sign that colocation is hurting and the party is over for the data center sector?

Investors and analysts say the moves are a reflection of trends in the telecom sector, and driven by the strong valuations of data center assets, rather than any worrisome slowdown in the colo and cloud business.

That’s the takeaway from several panels at the IMN Forum on Financing & Investing in Data Centers and Cloud Infrastructure, held last week at the Ritz Carlton Half Moon Bay...

For more, click here: http://datacenterfrontier.com/telco-data-center-sales/



Hindsight is 20/20: Lessons Learned from Sandy

Among the most important factors in finding a home for Information Technology infrastructure is location. Hurricane Sandy, or ‘Super Storm Sandy’ as it is referred to in the New York Metro market, caused calamitous damage throughout the region, especially in areas close to the water, in inland wooded areas, and on flood plains. While we cannot plan for every natural disaster, there are lessons that only a 100-year storm can teach us.

First, let’s consider the location of your company’s data systems.  This is the equipment that is the core of your business. If it goes down, then the company is down.  Regardless of each end-user’s location, if they are unable to access the company’s data systems, they can’t perform their jobs effectively.  Finding the most suitable location could be challenging, particularly for those in regions with regular seismic activity. However, in the New York metro market, there are other concerns:  rain, wind and floods.

Second is connectivity.  Assuring diverse and redundant connectivity to your data communications infrastructure is a key element to your ability to access it, even if other areas are under duress.  Knowing the underlying providers, the path of the cable systems, and assuring both private and public access to your equipment can help your company stay connected. 

Third: power supply. One of the most important factors in selecting a facility for your data systems is access to multiple, reliable primary power sources, along with the availability of backup power systems. Even with backup power available, generators only last as long as their fuel supply, and as we learned from Sandy, fuel supplies can be cut off. This is of particular concern for facilities surrounded by water, for example, in Manhattan or on Long Island, NY. The construction of the power systems at the facility is equally important. In some cases the generators were well above ground and fuel was available, but the fuel pumps were on the first floor and under water.

It is very important that all of these facilities be in place before the disaster. In the event of localized outages, the communications and utility companies can focus their efforts on individual outages. In large-scale events affecting hundreds of thousands or even millions of people, you cannot guarantee that the utility companies or the telecommunications carriers will have the resources to immediately attend to your needs. They must first restore emergency services, then hospitals and other critical, life-saving systems, and finally everyone else in order of relative priority. In other words – you are on your own. Whatever systems are functioning after the disaster might be all you have to work with for days, weeks, or, if your office facilities were permanently damaged, even months.

Open Data Centers’ facility at 15 Corporate Place South in Piscataway, NJ, is 16 miles from flood-prone downtown Manhattan, well above sea level, and was outside of the zone Sandy affected. The facility remained accessible throughout, both to diesel trucks bringing fuel to keep generators on line (when required) and to personnel needing to reach the facility. The location, just off Route 287, is easy and convenient to get to and is served by diverse power grids and a wide selection of carriers and service providers. The data center has true A/B power systems delivering up to 120 watts per square foot and fully redundant electrical and mechanical systems at all levels, including generators, UPS and HVAC systems with 2N redundancy throughout the entire infrastructure.

Finally, there’s service. Depending on the data center, service availability, choice, and flexibility can vary greatly. Typically your applications either fit into one of a company’s pre-set solution packages or they don’t, and often there are services you would like to have but cannot obtain due to a provider’s own limitations. The advantage of a smaller, more personal data center such as Open Data Centers is its ability to be flexible, nimble and responsive to customer needs, even when those needs don’t fit a specific mold. For starters, the Piscataway facility has a 24-hour Network Operations Center staffed with qualified technical support. Additionally, Open Data Centers offers services traditionally not available from the larger providers: low-cost, one-time-only cross-connect fees, flexible minimums for services and space, and competitive utility fees. And they are always open to new ideas and ways to personally serve their clients’ unique business needs.

The upshot? Open Data Centers has space available now, with the features and services required by the most discerning companies. To contact Open Data Centers, email odcsales@opendatacenters.net.

To learn more about Open Data Centers, LLC and its carrier-neutral, high availability data center in Piscataway, New Jersey, visit www.opendatacenters.net.


How You Stand to Benefit from Data Center Convergence and Colocation

Fortune 500 companies have always stored mission-critical information in a data center. These companies’ decision makers are used to having their data reside outside the confines of their offices, sometimes in data centers they own and operate, and other times in outsourced facilities. But the same cannot be said of their small- to medium-sized business (SMB) counterparts, who are often hesitant to move their sensitive data off premises.

When it comes to data center convergence, the biggest concern in the minds of SMB decision makers is that they will lose control over their data by housing it in an outsourced facility. They express genuine concerns over the ownership of their data should they choose a colocation facility.

In truth, however, the colocating party retains ownership of its data, as stipulated in the contracts it signs with data center providers. So while it might initially be disconcerting to lose daily contact with the systems themselves, businesses do not lose ownership or control of their data by choosing colocation or any other form of managed services to meet their needs.

The Quest for Data Center Management

There are three factors to consider when analyzing whether or not to colocate your data center equipment:

  • Space
  • Power
  • Telecommunications

Traditionally, space and power were relatively cheap and telecommunications solutions were expensive. Over the past decade, however, this model has reversed, leading more businesses to colocate their equipment rather than maintain it in house. Why should a law firm in midtown Manhattan, for example, pay a premium on office space to house its servers when it could instead store them at a data center in New Jersey and pay far less for space and power? In the office, those servers are probably protected by an off-line UPS system (meaning they are exposed to fluctuations in utility power such as brown-outs or spikes) with limited battery life. In a data center, those same systems can have access to multiple power grids with generator backup, and be protected by on-line UPS systems that smooth out the power delivered by the utility, dramatically increasing the mean time between failures. Data centers with redundant power distribution also allow users to take advantage of systems with dual power supplies.

When telecommunications services were expensive, it made sense to store your data onsite because the cost of transmitting data was very high. But the macroeconomics have now shifted to the point where the cost/benefit calculation favors colocation: the price to transmit data has shrunk considerably, while the price to power and house equipment has increased.

Because of the geographic locations involved, space is generally two to three times more expensive in an office than in a data center, according to Erik Levitt, CEO of Open Data Centers. A headquarters office will be conveniently located, where space is at a premium, while data centers are typically built where space is more plentiful. With this in mind, it makes little sense for companies to devote premium office square footage to a data center when they could fill that space with personnel.
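Levitt’s two-to-three-times figure translates directly into a per-rack comparison. The sketch below is purely illustrative: the rack footprint and per-square-foot rates are hypothetical placeholders chosen to show the arithmetic, not actual Open Data Centers or Manhattan pricing.

```python
# Hypothetical cost comparison for housing one server rack (assumed ~25 sq ft
# of floor space including aisle clearance) in premium office space vs. a
# data center. All dollar figures are illustrative assumptions.

RACK_FOOTPRINT_SQFT = 25          # assumed footprint incl. clearance
office_rate = 80.0                # assumed $/sq ft/year for midtown office
colo_rate = office_rate / 2.5     # data center ~2-3x cheaper, per the article

office_annual = RACK_FOOTPRINT_SQFT * office_rate
colo_annual = RACK_FOOTPRINT_SQFT * colo_rate

print(f"Office: ${office_annual:,.0f}/yr per rack")
print(f"Colo:   ${colo_annual:,.0f}/yr per rack "
      f"(saves ${office_annual - colo_annual:,.0f}/yr)")
```

Even before counting the power and redundancy advantages, the space arithmetic alone favors moving racks out of premium office floors.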

A Paradigm Shift

According to an infographic created by Emerson Network Power, there are more than 500,000 data centers globally, and their popularity continues to increase. There’s been a paradigm shift in information technologies: it used to be considered best practice to keep 80 percent of your data transmission local and 20 percent remote, to prevent the possibility of data loss and limit cost.

“It no longer makes sense to house data outside of the data center,” explains Levitt. “Your data should always be as protected as possible. Data centers provide a level of logical and physical security that cannot be achieved in the context of office space.”


With employees on hand at remote data centers 24/7, surveillance cameras monitoring the facilities, and access cards necessary to open doors, decision makers are quickly learning the security benefits associated with data center convergence. It’s imperative that your data is protected, and hiring a colocation provider is one way to achieve a higher level of service.

“The notion that data centers are less secure than an office is unrealistic,” Levitt says, adding that many of the most sophisticated and secure organizations use colocation facilities.

Finding the Right Price Point

In this difficult economic climate, companies are constantly looking for cost-effective solutions that help them successfully navigate a complicated business landscape. Many times when a company is under-performing financially, the first reaction is to reduce its marketing budget, something which should be avoided at all costs, Levitt says. Research shows that such a cut can be detrimental: according to a study of 600 companies conducted by McGraw-Hill, businesses that didn’t cut their advertising spending during the recession of the early 1980s saw growth of 256 percent.

Instead, cut the IT budget. Our infrastructure has evolved to the point where anyone from the SMB to a large enterprise can find an IT solution that reduces cost and increases reliability. With options such as cloud computing, virtualization, or colocation as small as a quarter of a cabinet in a data center, there is an affordable solution for everyone.

Companies that choose to colocate their equipment to meet their needs get the extra benefit of not having to purchase and maintain expensive infrastructure. According to the aforementioned study, a server purchased in 2011 has 45 times more computing power than a similar server purchased in 2001. When a company chooses modern cloud computing options, for example, they stand to benefit from the use of the latest technologies without having to make the capital expenditures necessary to acquire them.

Why Choose Open Data Centers

Located in Piscataway, N.J., just 16 miles south of New York City, Open Data Centers was founded in June 2012 and offers carrier-neutral colocation services. The company is positioned to become the premier purveyor of data center convergence services in the tri-state area.

When it comes to choosing a data center, business owners should consider cost, reliability and access, and Open Data Centers is one of the less expensive facilities in which to operate.

“Our rates to our customers are extremely reasonable, and we have operating synergies that foster that,” Levitt says. “We’re less expensive, more reliable and easier to reach.”

To learn more about why Open Data Centers is the right data center facility for your business, click here.


The Realities of Disaster Preparedness

Imagine you are the CEO of a B2B marketing company. Business has been great—but then the unexpected happens.

All of a sudden, a fire breaks out in your office; you are unable to extinguish it quickly, and your small data closet is destroyed. No one in the building was seriously injured, but your CIO needs hospital treatment for complications stemming from smoke inhalation. Gradually, your phones start ringing, and you know your email inbox would be flooded (if your email server were working) as customers, frustrated that your systems are down, find they can’t do business without support from your company.

You expect customers to be forgiving because there was a disaster.  Unfortunately, because the fire is a localized disaster only affecting your organization, they are not too sympathetic to your situation— because their customers aren’t going to forgive them.

Luckily, you have a disaster recovery plan in place—all of your data is backed up in the cloud. But you have yet to test it, and it turns out that your CIO is the only one in your company who knows the passwords to that data. Even then, it’s just raw data with no systems on which to run the applications to access it. Those now need to be rented or purchased, and the software licensing keys are in the CIO’s accounts online with the software providers.

All of a sudden you have that sinking feeling that it is going to be days—or even weeks—before your company is back online.

But by employing a best-in-class disaster recovery solution—one that is testable and easy to execute—business owners are able to navigate the unfamiliar waters that appear in the aftermath of a disaster with ease.

Two Kinds of Disasters

There are two kinds of disasters: those that are widespread and those that are localized.

In the case of a widespread disaster like Hurricane Sandy, customers are generally understanding because it is easy for them to see the results, and on occasion they experience the aftermath themselves. On the other hand, localized disasters—like the fire in the aforementioned example—can be arguably worse. When such disasters occur, your customers and competitors are likely unaffected so they have no expectation that your systems will be down. We would like to believe they would be sympathetic, but that sentiment will only last so long.

In either case, such disasters are unpredictable and unavoidable.

Many businesses don’t need disaster recovery plans.  They are either too small or their IT functions are not mission critical.  For those businesses that do need disaster recovery plans, we find that many of them are impractical to implement when a disaster occurs.

“A very high percentage of businesses don’t have good disaster recovery plans in place,” explains Erik Levitt, CEO of Open Data Centers. “Many businesses that have a regulatory or fiduciary responsibility to have a plan do have them in place, but may not necessarily have one that works.”

Recent research indicates that when companies do test their disaster recovery plans, 70 percent of them fail their own tests. Because medium-sized businesses stand to lose an average of $12,500 per hour of downtime, and the average cost of downtime for all businesses is $212,100, not having a strong disaster recovery plan in place can be financially devastating. What’s more, over the last five years, 73 percent of businesses have experienced unplanned downtime, so the odds favor such a situation occurring when a business least expects it.
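The cited figures translate directly into an expected-cost calculation. A minimal sketch using the $12,500-per-hour number quoted above for medium-sized businesses (the outage durations are hypothetical):

```python
# Estimate the cost of an outage for a medium-sized business using the
# $12,500/hour figure cited above. Outage durations are hypothetical.

HOURLY_COST = 12_500  # cited average loss per hour for a medium-sized business

def outage_cost(hours: float, hourly_cost: float = HOURLY_COST) -> float:
    """Dollar cost of an outage of the given duration."""
    return hours * hourly_cost

# A full day offline already exceeds the $212,100 average downtime cost
# mentioned above.
for hours in (1, 8, 24):
    print(f"{hours:>2} h offline -> ${outage_cost(hours):,.0f}")
```

At these rates, a single 24-hour outage costs more than the quoted all-business average for downtime, which is why untested recovery plans are such an expensive gamble.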

Many businesses, Levitt continues, have plans in place that they have never tested or test rarely. Recent research indicates that 13 percent of companies haven’t tested their disaster recovery plans at all, and 50 percent have tested them only once or twice in a year. That is because such plans are difficult to adequately test: not only must you initiate the plan, but you must also restore normal operations afterward. Additionally, many traditional disaster recovery contracts charge a fee to invoke the plan, even for a test.

Without having previously tested the plan that is in place, however, how can businesses know for certain that their data is secure and that their operations can seamlessly continue in the aftermath of a catastrophe?

Misconceptions about Disaster Recovery

A vast majority of business owners whose companies need disaster preparedness think that such misfortunes will never strike them directly.

“Maybe the mathematics are on their side,” Levitt says. “But I just don’t want to be the guy quoted in the newspaper saying that I never thought it would happen to my company. Protecting our business, customers and reputation is more valuable to us. That makes the mathematics irrelevant.”

Certainly, every company needs to weigh the risks associated with disaster recovery individually. But if a company ultimately decides that it needs such a plan, it should make certain the plan it selects is a strong one.

A good disaster recovery plan (and its underlying contracts) allows you to test your recovery plan so you have some experience with how to operate your business should a disaster occur.

“If you’re going to have a proper disaster plan, it must be simple to invoke and simple to restore to normal operations,” Levitt explains. “If neither of those are true, the likelihood that the plan is effective is very small, and the likelihood that it has been tested is equally small.”

Choosing the Right Disaster Recovery Plan

Many business owners like having their servers and other technical equipment onsite simply because they feel comfortable knowing that it is physically nearby. But thanks to the evolution of the telecommunications industry and the power of the cloud, such equipment can easily be maintained offsite. Storing equipment offsite in geographically diverse facilities offers an additional layer of protection while freeing up valuable office space.

With that thought in mind, Levitt says that the best way to avoid enduring a disaster is not to have the equipment located where disasters are likely to occur. In his opinion, that means companies should use secure data centers as their primary sites, taking as many precautions as they possibly can.

“A traditional office space is far more likely to suffer from a disaster than a well-situated data center,” he says. “Fire, theft, power outages and telecommunications issues are all far less likely.”

Located 16 miles south of New York City, Open Data Centers’ Piscataway, N.J. facility—which was launched in June 2012—is one such facility business owners seeking a strong disaster preparedness plan should consider, Levitt says.

“We are not on a flood plain. The facility is far from the coast line, high above sea level and serviced by multiple power grids,” Levitt explains. “Any one failure cannot disrupt service to our data center customers. The building is fireproofed and we have a security staff onsite 24/7. We’re an ideal primary site.”

If you are a business owner looking to build an industry-leading disaster preparedness plan that future-proofs and protects both your sensitive data and the strong relationships you maintain with your customers, click here to learn more about how Open Data Centers may be the right match for you.


The Cloud, Inside Out

Over the past several years, cloud computing has crossed the threshold from IT concept to reality, becoming an often-used phrase entrenched in the vocabulary of enterprises and consumers alike. While there have been attempts to define the cloud, this has often proven difficult; it can be argued that the cloud does not exist, at least not as the single, independent entity to which it is commonly referred.

So if the cloud isn’t a standalone unit, then what is it? As IT moves more functions to the cloud, the definition continues to expand and evolve. In 2009, in order to cut through some of the industry confusion, the National Institute of Standards and Technology (NIST) set forth its first definition of cloud computing, as follows: “cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g. networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.” NIST also outlined three service models (software, platform and infrastructure) and four deployment models (private, community, public, and hybrid).


There are ongoing attempts to standardize other cloud computing definitions, and quibbles with what NIST has set forth, particularly its tightly defined service model, which doesn’t easily accommodate the breadth of IT offerings evolving to as-a-service. But what the NIST definition does very well is clearly show the range of assets required in cloud computing and their dependency on one another. It demonstrates that the cloud is not an amorphous, undefinable concept, but rather a set of highly interdependent technological components that interact with each other to create a “cloud service.”


We often abstract away, at least on paper, components of an infrastructure that we do not feel the need, nor seek, to understand. Case in point: we often represent the Internet, or any large, complex public network (such as the PSTN), with a cloud symbol. Now we are extending that abstraction to systems and software. In the past, we sought to understand why one ISP or one telephone company was better than another by analyzing the components of the infrastructure that make up that service provider. We must do the same with cloud computing providers.

The cloud comprises infrastructure that stores and processes data, networks, an interface used to access information and resources, and, of course, somewhere for all of this equipment to live. It is this last point that sometimes proves the most difficult to comprehend: the cloud actually lives somewhere tangible and real. The home where the cloud and all of its capabilities reside is the data center. The data center offers the secure physical infrastructure and the access to high-volume networks that the cloud needs to operate.

Open Data Centers, a carrier-neutral data center operator in New Jersey and New York City, recently launched its first carrier-neutral facility in Piscataway, New Jersey. The data center offers enterprises, service providers and carriers 10,000 square feet of cost-effective, high-quality space in the high-demand New Jersey and New York metropolitan area.

As more and more IT assets become cloud-based, and consumers and enterprises migrate daily functions to cloud-based systems, it is increasingly important to understand the different components that make up a cloud, and the critical role the data center plays as the engine that drives the cloud as we know it.

To learn more about Open Data Centers or to schedule a tour, please visit www.opendatacenters.net.



© 2019 Open Data Centers