Like a phoenix rising from the ashes … so does this blog.

Simply put, life gets in the way. No excuses though – I should be writing more.

My hope is that this blog becomes more of a dialog and an exploration of ideas rather than just my view. I do not have all the answers. I do have a lot of questions. And I believe the only way we get to answers and great ideas is through open communication, collaboration, and contribution – the 3 Cs. I also believe that a large sample size is needed to validate a hypothesis, and that my sample size is small.

Let me start with a quick bio to show my journey – past, present, and future.

Manny Rodriguez-Perez

Manuel “Manny” Rodriguez-Perez is a Digital Business Lead and technology strategist at Dell EMC. Manny draws on 20 years of Enterprise IT experience to advise customers on how new technology, organizational transformation, and IT governance can enable business initiatives. He has a patent on secure Smart Grid / IoT infrastructure from his work at a leading utility company. Manny has led the infrastructure service delivery practice at a Fortune 200 company and has architected a multi-datacenter hybrid cloud solution to support ITaaS at a global security company. He also developed the initial Sarbanes-Oxley computing controls for a public company and successfully coordinated multiple audits. Manny’s current areas of focus are next-generation platforms to support digital business and applying operations management methodology to improve IT delivery.



OSCON 2015 – Lean In!


There is a reason why all the cool kids are gone next week … it’s OSCON 2015.

Held in Portland July 20 – July 24, OSCON is an O’Reilly conference for the discussion of free and open source software. And in its 16th year – yes, you heard me correctly, 16th year! – it appears to be the event that can help those of us in more traditional IT landscapes lean into the shift to microservices, open community development, DevOps, and chickens. The content is intriguing – ranging from the practical, to the fun, to the esoteric, and the plain sublime. Just check out these sessions …


Using Docker to simplify distributed systems development
John Hugg (VoltDB)
4:10pm–4:50pm Thursday, 07/23/2015
Foundations Portland 256
“This session is designed for developers tasked with building distributed systems. I will explore how using Docker to isolate processes in a clustered environment simplifies distributed system development and debugging processes.”

Decorating drones: Using drones to delve deeper into intermediate Python
Matt Harrison (MetaSnake) @__mharrison__
9:00am–12:30pm Monday, 07/20/2015
Foundations Portland 252
“… build your knowledge and cover some of the more exciting aspects of Python that tend to bite new Python programmers. The teaching will be combined with drone programming — we will use the constructs to help us program a drone.”

A general theory of reactivity
Kris Kowal (Uber) @kriskowal
5:00pm–5:40pm Wednesday, 07/22/2015
Foundations D135/136
“… In this talk I will explain a General Theory of Reactivity, which will train you how to choose the right asynchronous tool for the job, covering Streams, Promises, Tasks, Futures, Observables, and more.”

Open source design: A love story
Una Kravets (IBM Design) @una
10:40am–11:20am Thursday, 07/23/2015
Craft E146
“When designers and developers work together from the start, it produces better outcomes. But how can we get designers involved and wanting to participate in the open source community from the start? In order to figure out how to fix it, we need to take a look at the barriers for designers (why they don’t participate in open source), and how we can work together to influence change.”

A full schedule is available on the OSCON site for your viewing pleasure.
If OSCON 2015 doesn’t validate the growing importance of open source, DevOps, and code culture – then I don’t know what will. And some of our fellow EMC & Pivotal brothers and sisters will be presenting as they blaze new ground …


Create beautiful dashboards from many sources of data using open technologies
Jonas Rosland (EMC), Kate Greenough (EMC) @jonasrosland
10:40am–11:20am Wednesday, 07/22/2015
Data Portland 252

Microservices with Spring Cloud and Netflix OSS
Spencer Gibb (Pivotal)
9:00am–12:30pm Tuesday, 07/21/2015
Architecture Portland 251

Build your first Internet of Things app today with open source
Fred Melo (Pivotal), William Oliveira (Pivotal) @fredmelo_br, @william_markito
9:00am–12:30pm Tuesday, 07/21/2015
Events, Sponsored E142

Scalable graph analysis with Apache Giraph and Spark GraphX
Roman Shaposhnik (Pivotal Inc.)
2:30pm–3:10pm Thursday, 07/23/2015
Data Portland 252


So go ahead – tune in on Monday, July 20, 2015 at 9am Pacific – and “LEAN IN”!


Check out more lore here —

XtremIO (or AFA) + Standardized Compute vs Exadata – A Perspective on Risk Management

I was recently fortunate enough to learn a lot about Exadata while researching Data Warehouse infrastructure solutions with copy management and orchestration. I want to share more detail on my perspective regarding the operational risk of deploying a known traditional stack such as XtremIO (or other AFA) + Standardized Compute versus Exadata. We start from the position that deploying either solution for Oracle workloads carries operational risk and unknowns. Here is my thinking, along with my questions:

  • If both solutions have unknowns – which to me equals risk – then the question is how we qualify those unknowns to see where the most risk lies. I built the table below as a way to think through the potential risks along the IT operational focus areas that I believe matter most. This is my thought process entirely, and I am open to constructive criticism.
  • Here are a few lingering questions that I still have:
    • Compatibility – Btw Infra and Application/Platform – I can see how XtremIO (or other AFA) + Standardized Compute can be a concern due to vendor interop/compatibility issues and tuning effort. However, I see Exadata as a much riskier solution given its strict requirements on DBMS version and potentially on application code. And for those who would say an engineered solution can be a “hammer” to force application/DBMS lifecycle discipline, I would say that in my experience these “hammers” are effective only when compliance requirements with financial penalties, like Sarbanes-Oxley GCCs, are the driver. To me there is a much larger unknown in how we will manage application-to-DB-to-infra code dependencies on Exadata in the future versus what we could be doing today in compatibility management.
    • Patching/Code Upgrades – From what I have read, the patch process for Exadata requires maintenance at three layers with three different tools – DB hosts (firmware / OS & Oracle GI / RDBMS), the InfiniBand network, and the storage nodes. I cannot confirm from Oracle whether these are bundled into one single patch or applied separately at the different levels. It appears from Oracle’s own documents and several blogs that the rolling patch process at a minimum involves quarterly updates at 3 different levels, can take up to 2 hrs per storage node (14 hrs+ on a ½ rack), and requires good Linux CLI skill sets? HOWEVER, I need your help verifying. See the website links below for the references I found. Note – To manage this complexity, Oracle has a tool called OPLAN that helps with the patch process by summarizing the different patch strategies available; once you have made your choice, OPLAN generates step-by-step instructions telling you exactly which commands to execute in your environment, which limits errors and reduces preparation time. You still have to patch each layer separately and run the commands manually.
    • Patching/Code Upgrades – If Exadata is a single infra-to-platform stack, would we need an additional Exadata for patch testing and, more importantly, application testing? Unless you are considering OVM – which to my understanding is not widely deployed – is there a way to partition a test environment on the production frame? And even if so, would the interaction btw OVM, Flash Cache, and the storage cells still warrant a separate test Exadata system?
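To put those rolling-patch figures in perspective, here is a back-of-the-envelope sketch. The 2 hrs per storage node and quarterly cadence come from the numbers quoted above; the 7-cell half-rack count is my assumption for illustration, not an Oracle specification.

```python
# Back-of-the-envelope Exadata rolling-patch window estimate.
# The per-node hours and quarterly cadence come from the figures
# quoted in this post; the half-rack cell count is an ASSUMPTION.

HOURS_PER_STORAGE_NODE = 2     # upper-bound rolling patch time per cell
STORAGE_NODES_HALF_RACK = 7    # assumed cell count in a half rack
PATCH_CYCLES_PER_YEAR = 4      # quarterly updates
LAYERS = 3                     # DB hosts, InfiniBand network, storage nodes

storage_hours_per_cycle = HOURS_PER_STORAGE_NODE * STORAGE_NODES_HALF_RACK
patch_events_per_year = LAYERS * PATCH_CYCLES_PER_YEAR

print(f"Storage-node patching per cycle: {storage_hours_per_cycle} hrs")
print(f"Patch events per year (layers x cycles): {patch_events_per_year}")
```

Under those assumptions the storage-node pass alone matches the 14 hrs+ per half rack mentioned above, before the DB host and InfiniBand layers are even touched.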

One additional observation – we have strived to abstract platforms/applications from infrastructure for the last 5–8 years through OS virtualization because we see great value in the flexibility while still achieving enough standardization. This trend will accelerate in the coming years as containers further abstract platforms/applications from infrastructure by removing dependencies on the OS. By marrying the infrastructure/OS with the platform under one code matrix and one HW/SW architecture, Exadata goes in the complete opposite direction from where the industry is heading.

Risk in Unknowns – by IT Operational Area, comparing XtremIO (or other AFA) + Standardized Compute vs Exadata X5

Capacity & Cost Scalability

  • Both – The ins and outs of capacity and cost scalability are pretty well understood across both platforms. Risk depends on the ratio of standardized compute to Exadata x86 CPU. Depending on initial vs long-term needs, standardized compute may be slightly more expensive due to the Oracle licensing model, which assumes you can turn off Exadata cores; however, how practical this is in the long run is unknown.

Availability

  • Both – Both solutions provide known and well-established availability architectures.

Compatibility – Btw Infra Components

  • XtremIO (or other AFA) + Standardized Compute – Low to Moderate. We have all experienced compatibility issues between components from multiple vendors in the infra stack. If you have preferred vendors that are partners of each other, with extensive joint support and escalation agreements, then you will be in better shape. IBM, Oracle, and EMC come to mind.
  • Exadata X5 – Exadata provides an engineered solution with a fixed support matrix delivered completely by Oracle. Our expectation is that there is very low risk between infra components in the stack.

Compatibility – Btw Infra and Application/Platform

  • XtremIO (or other AFA) + Standardized Compute – Some have experienced compatibility issues with heterogeneous vendor stacks that require experienced resources and committed vendors to resolve. Additionally, there is a fair amount of abstraction between the infra and platform layers, so supportability is at least good.
  • Exadata X5 – Moderate to Major. Exadata will dictate strict compliance between Exadata code levels and the Oracle RDBMS version required by applications. Exadata marries the platform layer to the infrastructure/OS layer and spreads the DB code throughout the storage, i.e. the storage FW is also DBMS code?

Patching/Code Upgrades

  • XtremIO (or other AFA) + Standardized Compute – Known support and processes. Small patch test environments can be spun up with virtualization or smaller footprints to isolate new code from production and from infrastructure. Most common AFA storage software is relatively easy to install, and full regression testing is usually done with multiple vendor stacks.
  • Exadata X5 – Patching relies on a multi-step process and Linux CLI skill sets? Requires a separate Exadata system to test patches? Exadata could require more patches than traditional stacks – quarterly x 3?

Vendor Involvement / Escalation

  • XtremIO (or other AFA) + Standardized Compute – Most large enterprise vendors have known and expected support and escalation processes. I am assuming that large enterprise vendors have the focused enterprise account teams (sales, presales, and support) needed to escalate and resolve issues.

Based on the analysis above, I would say that deploying XtremIO (or other AFA) + Standardized Compute carries less risk/unknown than deploying Exadata, at least in an environment that has some maturity in operating and managing the XtremIO (or other AFA) + Standardized Compute stack.

I welcome your comments on perspectives I might have missed. 


I’m back!!!…My new passion…Data Scientists, Advanced Analytics, and this BIG DATA thing!

I finally got to watch the movie Moneyball, which I had been meaning to for a while. Have you seen it? If not, definitely take some time to watch it. It is the story of Oakland A’s general manager Billy Beane’s successful attempt to put together a baseball club on a budget by employing computer-generated analysis to draft his players, based on the book “Moneyball: The Art of Winning an Unfair Game”. The movie gets you pumped up about the potential of analytics and big data – potentially a good ice-breaker for analytics discussions.

Want to Keep Your Cloud Data Private? Ship it to Argentina

Forrester Research has come up with a cool interactive map that shows the current status of data privacy laws across countries, called the Interactive Data Protection Heat Map. It heats up the cloud privacy debate by showing that US data privacy is on par with Russia’s. Even more disturbing is the “warning” symbol indicating possible government surveillance. If you want more privacy, you had better copy your data to Argentina or Germany. Okay, Che!

Vendor Risk Sharing – An Option for Private Cloud Elasticity

True widespread public cloud adoption by large enterprises will take some time. First, the underlying service level and security concerns will need to be addressed. In the meantime, companies can build expertise in cloud technologies and realize immediate benefits by building private IaaS clouds or evolving their current infrastructure to be more like a service. One of the key benefits of IaaS is the ability to scale infrastructure up and down fairly quickly, improving agility and time to market, not to mention delighting internal customers.

The goal here is to give internal customers the perception of almost infinite elasticity and to build services that are close to on-demand. However, achieving this goal requires internal staff to keep more infrastructure inventory on hand than they are used to, a proposition that goes against some of the cost benefits of IaaS. Some are meeting this challenge with off-premise or external clouds and “bursting”. In bursting, you procure additional compute resources from an IaaS provider, typically for a short period of time, while you add capacity to your existing private IaaS. Most of the conventional wisdom in this space is moving towards internal private IaaS coupled with external private IaaS bursting to meet elasticity demands.

However, there is another option that can help meet the inventory/capacity challenge: technology vendors can share the inventory risk by providing hardware and software upfront at close to zero cost and charging only when it is put in service. HP had a similar model with their Intel server hardware a few years ago. They would provision a full rack of servers in your data center and charge you, per server, as you turned them on. There were never any delays due to HW provisioning. I refer to this as vendor inventory risk sharing and see it as a viable option for enabling elasticity and agility.
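To illustrate the economics of vendor inventory risk sharing, here is a quick sketch. The prices and server counts are entirely made-up illustrative numbers, not vendor figures.

```python
# Hypothetical cost comparison: buying burst inventory upfront vs.
# vendor inventory risk sharing (pay per server only when activated).
# All prices and counts are made-up ILLUSTRATIVE numbers.

COST_PER_SERVER = 8_000   # assumed purchase price per server
RACK_SIZE = 20            # servers pre-provisioned on site
ACTIVATED = 6             # servers actually turned on this year

upfront_cost = RACK_SIZE * COST_PER_SERVER
risk_share_cost = ACTIVATED * COST_PER_SERVER

print(f"Buy whole rack upfront:  ${upfront_cost:,}")
print(f"Pay-on-activation model: ${risk_share_cost:,}")
print(f"Capital deferred:        ${upfront_cost - risk_share_cost:,}")
```

The point of the model is the last line: the enterprise gets the full rack's elasticity on day one, while the capital outlay tracks actual consumption.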

Enterprise Cloud Adoption – A Need for Serious WAN Optimization

One topic I don’t hear a lot about is how you get large amounts of data into the cloud and between clouds. Maybe the assumption is that enterprises will ultimately migrate to “everything as a cloud” (EaaC): presentation, computing, AND data in the cloud. Consequently, it is understandable that all the discussion is around SaaS, custom thin clients with PaaS, or applications on IaaS. However, wide enterprise adoption of the cloud will rely on the flexibility to use different cloud technologies and providers for different portions of the application and infrastructure.

In the long run, enterprises will need a mix-and-match strategy driven by financial, security, and compliance constraints. For instance, cloud architectures should support having your data on site in your data center and the compute and presentation layers in the cloud. Cloud architectures should also support having your data and your compute on different clouds. A critical requirement of this flexibility is the ability to move potentially large amounts of data to and from clouds, since your data and compute resources may be in different data centers. Ultimately, cost-effective network connections are needed. However, the bandwidth needed between components is typically a constant and can only be optimized so much in code. In the end, the only option is to optimize the traffic in the network between the components. This is where WAN optimization technologies and products come in.

WAN optimization technologies and products use compression, caching, and de-duplication to reduce the actual bits going across the inter-cloud network, allowing enterprises to deploy smaller-bandwidth networks and save operating costs. Without a serious offering of WAN optimization technologies and products, large-scale enterprise cloud adoption will fail.
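As a rough illustration of why compression and de-duplication shrink the bits on the wire, here is a minimal Python sketch of chunk-level dedup plus compression. Real WAN optimization appliances are far more sophisticated; this only shows the principle.

```python
# Minimal sketch of two WAN-optimization ideas from the post:
# chunk-level de-duplication plus compression. Toy model only.
import hashlib
import zlib

def bytes_on_wire(stream: bytes, chunk_size: int = 4096) -> int:
    """Send each unique chunk once (compressed); repeats cost only a hash."""
    seen = set()
    total = 0
    for i in range(0, len(stream), chunk_size):
        chunk = stream[i:i + chunk_size]
        digest = hashlib.sha256(chunk).digest()
        if digest in seen:
            total += len(digest)                 # repeat: 32-byte reference
        else:
            seen.add(digest)
            total += len(zlib.compress(chunk))   # new: compressed payload
    return total

# Highly redundant traffic, e.g. replicated logs or backup streams.
data = b"the same log line repeats over and over\n" * 10_000
print(f"raw bytes:  {len(data):,}")
print(f"wire bytes: {bytes_on_wire(data):,}")
```

On redundant streams like this, the wire byte count collapses to a small fraction of the raw size, which is exactly the effect that lets enterprises buy smaller inter-cloud links.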

Unleash your inner cloud with

An interesting play on IaaS for training, demonstrations, and development: it provides you an environment of up to 6 servers on their cloud for free. The key word here is free. You can choose from multiple OSes and enterprise software, including Oracle on Windows, CentOS, Xubuntu, and MS SQL 2008. Setup is blazing fast; I built a test bed of 3 CentOS servers and a Windows workstation in under 20 minutes. Environments can also be shared with peers and customers through email. They also have a paid enterprise offering for those who need more features. Enjoy!

iPad App for the Sick (Lazy) Parent

A few weeks ago I wasn’t feeling too well, tired and with a scratchy throat. It was bedtime story time for my boys, and my little one picked out Dr. Seuss’ book “One Fish Two Fish Red Fish Blue Fish”. I looked upon him in despair. For those who have read the book, you know it is 63 pages long, not quite what I had in mind for a bedtime story. What to do?

In comes one of the best apps I have found to date for the iPad: Dr. Seuss’ “The Cat in the Hat” interactive book. Not only are the graphics and transitions beautifully simple (imagine Ken Burns effects on scans of the original book pages), the book reads to your children! Yes, I said READS to your children. The kids can also press on the images and the e-book says the names. Press on Sally and the e-book says “Sally!”; press on the fish and the e-book says “fish”. Triple-press the fish and the e-book says “fish fish fish”. It’s very responsive and amusing.

Long gone are the days of stressing my voice. Now I just tap the “Read to Me” button, sit back with one kid on each side, and enjoy that cough drop.

AMI Data Center Infrastructure Delivery

I recently started thinking about my experiences over the past year with the delivery of IT infrastructure for Automated Meter Infrastructure (AMI) projects. A few major themes come to mind:

  • Importance of time to market as the industry is still in flux
  • Extremely dynamic requirements
  • “We don’t know what we don’t know”

This is one area of IT where the traditional deployment schedule of 10–18 months does not work. Design, procurement, and deployment must be done in a few months. Additionally, experimentation and rapid innovation are critical, as new apps and functionality are being developed and deployed weekly.

The constraints above will flex data center infrastructure processes. Every process, from provisioning to lifecycle, will have to be accelerated. Add to this the projected scale of data generated by AMI initiatives and you have what I call a “game changer”. So how do you handle the agility and scale constraints when deploying AMI data center infrastructure? Focus on quick delivery followed by a strong, formal optimization stage. It’s what I have termed the “AMI Infrastructure Quick Delivery Methodology”, and it looks like this:

  1. Design and Procure
    – Typically 3-5 months
    – Largely based on estimates & “fuzzy” benchmarks
  2. Deploy / Pilot
  3. Gather Metrics / Assess Performance
    – Do we have gaps?
    – Did we meet our design requirements?
  4. Optimize / Plan
    – Mitigate issues
    – Add capacity, resize
    – Determine 1 year capacity plan

This methodology is balanced more towards agility and delivery than towards short-run cost optimization. However, the benefits from shorter time to market for new business functionality will trump the extra cost.

The challenge in IT infrastructure will be to manage scale, complexity, and cost while staying flexible and agile. How do you manage applications with 100 TB of primary storage and 5 TB of daily change and still deploy effective backup solutions? How do I share information or give access to other systems at this scale? How do I manage the cost/value perception (I can get 500 GB at CompUSA for $50!)? Ultimately, the answer lies in new technologies, new procurement processes, and new organizational structures. A short list of action items to come …
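As a quick worked example of the 100 TB / 5 TB backup question above, here is some back-of-the-envelope arithmetic. The 30-day retention and 3:1 data-reduction ratio are my illustrative assumptions, not recommendations.

```python
# Rough backup-capacity arithmetic for the example above:
# 100 TB primary, 5 TB daily change. Retention and dedup ratio
# are ILLUSTRATIVE assumptions.

PRIMARY_TB = 100
DAILY_CHANGE_TB = 5
RETENTION_DAYS = 30
DEDUP_RATIO = 3        # assumed data-reduction factor on the backup target

naive_tb = PRIMARY_TB + DAILY_CHANGE_TB * RETENTION_DAYS   # one full + incrementals
deduped_tb = naive_tb / DEDUP_RATIO

print(f"Incremental-forever, raw:  {naive_tb} TB")
print(f"With {DEDUP_RATIO}:1 dedup: {deduped_tb:.0f} TB")
```

Even this crude model shows why the answer cannot be traditional weekly fulls: at this change rate, the backup estate quickly rivals or exceeds the primary footprint unless data reduction does serious work.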