Focus on Excellence

Technically, API stands for Application Programming Interface. Many companies have built APIs for their customers or for internal use.

We at Mindviser successfully help companies build APIs as a great method of integrating systems.

Let’s take a look at some of the most common challenges and pitfalls you may encounter when building out an API center of excellence (COE).

What Are REST APIs?

Considered by many programmers to be the ideal approach to designing networked applications, REST APIs – or, to be precise, Representational State Transfer Application Programming Interfaces – were originally introduced by Roy Fielding in his doctoral dissertation at the University of California, Irvine. The approach sets constraints on the process of designing applications: in the oft-quoted summary, REST APIs “rely on a stateless, client-server, cacheable communications protocol – and in virtually all cases, the HTTP protocol is used”.

But even though this approach offers several advantages – not least of which is a universal platform over which many systems can communicate – REST APIs do present some challenges. Those challenges are especially evident if you’re using REST APIs to build out a COE.

Challenge 1: Setting Up

Setting up a COE requires not only forethought but a big-picture mindset. Who will the key contributors to this platform be? Which language will serve best? Proper communication is essential to keep a company working at its best, so the end result is of utmost importance. Without a proper method of communication, productivity will decrease – and, on a more basic level, if the COE is built in a language that isn’t universally accepted among employees, that alone can cause problems.

Proposed Solution

The best practices of COE build-out using REST APIs indicate that the contributors should come from diverse sectors within the company, since outside consultants may not be familiar enough with the organization’s practices to be effective. At a minimum, a COE should have both a Chief Guidance Officer and a clear-cut objective within the framework of business operations. Above all, the scope needs to be specific: if it is too broad, the COE will be counterproductive for both the workers and the company.

Challenge 2: Building Out Standards

Another typical challenge of setting up a COE is creating and implementing standards to guide future production. A surprising number of organizations fail to implement governance techniques that ensure procedures are both understood and observed.

Proposed Solutions

The task of compiling and integrating reports, performing assessments, and defining guidelines requires a large investment of manpower and, most importantly, time – time that is easily wasted if the procedures are not followed consistently from the moment they are implemented.

Challenge 3: Implementing With Multiple Users & Providers

Finally, one of the biggest challenges of using REST APIs to create a company-wide COE arises when the implementation needs to include not just multiple users but multiple service providers. This can lead to a communication problem: while multiple users shouldn’t have trouble understanding company-wide terminology, the same can’t be said for multiple service providers.

Proposed Solution

By building the COE around a universally understood and accepted vocabulary – free of unprofessional slang or jargon – implementing it with multiple users and providers shouldn’t be a problem.

In short, while REST APIs certainly present challenges during implementation, the solutions are just as simple to put in place.

If you want to integrate technology solutions for your business, contact us for more details.

Benefits of Using a CouchDB Update Handler Vs Using Native CouchDB Update APIs

If you are still using the native CouchDB update APIs, the CouchDB update handler will impress you with its performance. Three times more efficient than some of the de facto approaches, it provides a highly performant mechanism for bulk updates, especially when you don’t have the _rev ID handy.

Before getting into the details of how it works and why you need it, let’s get comfortable with CouchDB:

What is CouchDB?

CouchDB is one of the best document-oriented databases to replace MySQL, and it offers its users a variety of distinctive features. By eliminating application-layer procedures, it allows users to handle their data directly.

Access Handling in CouchDB

CouchDB has a huge following because it has none of the ‘read locks’ that developers may have faced with MySQL. If you have multiple users accessing the database simultaneously, CouchDB serves them all quickly.

Features of CouchDB

  • The database is easily replicated across multiple servers.
  • Document-level indexing offers fast and error-free retrieval.
  • Document creation, update, retrieval and deletion are all done with REST calls, causing no read locks.

Update Handling

  • Each document has a unique identifier used to read, write, update and perform other operations.
  • Document IDs are case-sensitive strings – so make sure you enter them correctly.
  • If two documents have the same document ID, they are considered the same document.

So it looks like you’ve started liking the way CouchDB handles your data on a multi-node CouchDB cluster, given the safety it offers.

You should. Communicating with your data is very easy in CouchDB. Whether it is retrieval, storage or update, all operation data is returned in JavaScript Object Notation (JSON) format.

Using Native CouchDB Update APIs

An important question now:

Have you ever been in a situation where you need to put a change on all of the CouchDB documents?

If so, you surely know how time-consuming those ‘upsert’ operations can be. The native CouchDB update APIs offer a foolproof method of updating a document, but you need to know both the document ID and its current revision. If you don’t have the revision handy, it is necessary to read each document first to obtain it, and only then perform the update.

CouchDB HTTP APIs

Updating an existing document is achieved with a PUT request. As each document in CouchDB is a JSON object, it is identified by a revision number stored as _rev. When you update a particular document, you must supply the current _rev, otherwise your request will end with a 409 (conflict) error. If the revision number in the PUT request matches what’s in the database, the update is accepted and a new revision number is generated and returned.

Well, with the CouchDB HTTP APIs you first need to (as sketched in the code after this list):

  1. Fetch the latest document revision (by passing on the Document ID).
  2. If the process was successful, update the descriptions.
  3. Push the updated document back to the CouchDB database.
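
Here is a minimal sketch of that fetch-modify-save cycle in Python, using the requests library; the host, database and field names are borrowed from the example later in this article and are purely hypothetical:

import requests

BASE = "https://no-host.mindviser.io/my_product_info"  # hypothetical host/database
DOC_ID = "123456"

# Call 1: fetch the current document, which carries its latest revision (_rev)
doc = requests.get(f"{BASE}/{DOC_ID}").json()

# Local step: apply the change to the description fields
doc["shopper_delight"]["department"] = "wellness"

# Call 2: push the whole document back; the _rev we fetched must still match,
# otherwise CouchDB answers 409 and the fetch-modify-save cycle starts over
resp = requests.put(f"{BASE}/{DOC_ID}", json=doc)
print(resp.status_code, resp.json())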

Doesn’t this sound like a performance challenge? Let’s check:

Performance of Native APIs

We are using 3 HTTP calls to the database here. Whether or not the document is present in the database, the computational resources utilized remain the same. Fetching, updating and then saving the updated version is an old-school way of performing an ‘upsert’ – to say the least! Hence, we obviously need a more efficient method.

Using CouchDB Update Handler

Update handlers are purpose-built for performing update actions on documents. They are an unconventional method of updating documents that speeds up the process and saves resources.

Update handlers can be explicitly called to update a document’s contents. No longer do we need to retrieve each document by its ID, apply the changes, save it back to the database, and keep doing this until we have updated every document: the update handler makes the changes within the server itself.
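
Concretely, an update handler is a JavaScript function stored in a design document. As a rough sketch – the design document name (“yehf”) and handler name (“upsert”) come from the example URL later in this article, while the function body is our illustrative assumption – an upsert handler might look like this:

{
  "_id": "_design/yehf",
  "updates": {
    "upsert": "function (doc, req) {
      var body = JSON.parse(req.body);      // fields sent by the caller
      if (!doc) {                           // no document with this ID yet,
        body._id = req.id;                  // so create one (the insert half)
        return [body, 'created'];
      }
      for (var k in body) {                 // document exists: merge the new
        if (k !== '_id' && k !== '_rev') {  // field values into it
          doc[k] = body[k];
        }
      }
      return [doc, 'updated'];              // CouchDB saves doc server-side
    }"
  }
}

(In the stored design document the function body is a single JSON string; it is spread over several lines here for readability.)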

Performance of Update Handlers

With CouchDB update handlers, a web client can update a document’s contents by sending a single request to the document, carrying the document ID and the details of whatever fields need to change. The result is a single HTTP call per update. This method is therefore up to three times faster and more efficient than the former (native API) method.

How does the Update Handler work?

For an update request, the update handler includes the document ID in the URL. The syntax of the URL is:

https://{host}/{database}/_design/{handler}/_update/upsert/{id} 

This is a REST PUT call, and it requires no search operation to be performed. The server responds to this call with the most recent version of that document.

What if you do not know the document ID?

As described above, the PUT form requires the document ID in the URL. So are you stuck when the document ID is not known to you?

Of course not!

Just as you used the PUT command to invoke the update handler with a document ID, you can use the POST command to invoke it without one. The syntax in each case is:

PUT command - /<database>/_design/<design>/_update/<function>/<docid>
POST command- /<database>/_design/<design>/_update/<function>

Effectively, the POST form also executes the update handler in a single HTTP call to complete the update. Filled in with real values, the PUT form looks like this:

PUT https://no-host.mindviser.io/my_product_info/_design/yehf/_update/upsert/123456 

Then, in the body of the request, you would send the contents of the JSON document, i.e.:

{"_id": "123456", "doc_type":"shopper_delight","shopper_delight": {"contact_id":234876,"phone_nbr":"7075551234","category":"consumer","department":"health","last_update_timestmp":"2018-08-24T04:59:52.00"}}

This content can be used in multiple ways as needed.
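
For completeness, here is what that single call might look like from Python, again with the requests library and the same hypothetical host and database names:

import requests

# Hypothetical host, database, design document and handler names from above
URL = ("https://no-host.mindviser.io/my_product_info"
       "/_design/yehf/_update/upsert/123456")

payload = {
    "_id": "123456",
    "doc_type": "shopper_delight",
    "shopper_delight": {
        "contact_id": 234876,
        "phone_nbr": "7075551234",
        "category": "consumer",
        "department": "health",
        "last_update_timestmp": "2018-08-24T04:59:52.00",
    },
}

# One HTTP call -- no prior fetch of the _rev is required
resp = requests.put(URL, json=payload)
print(resp.status_code, resp.text)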

Feature Comparison – The Advantages of Using Update Handler over Native Update APIs

The benefits of using the CouchDB update handler over the native CouchDB update APIs are numerous. They include:

Increased Operation Speed

In a database connected to websites or applications, live updates take place very frequently. If you are using the normal native API to perform these operations, each query takes roughly 300% of the time it would take through the update handler API. As operations generally number in the hundreds or even thousands, this causes a drastic decrease in update speed, and thereby in performance. Switch to the update handler, and your system runs that much faster.

Optimal use of resources

In the case of the update handler, only one HTTP call happens. That means far fewer resources are occupied or consumed than in the case of the native APIs.

Usability of fetched data

The returned data is in JSON format, and JSON can be converted into an array or string with ease. Automated operations, or forwarding this data to multiple other applications, can therefore be done without any major obstacle.
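
As a quick illustration in Python (the sample values are our own):

import json

raw = '{"_id": "123456", "doc_type": "shopper_delight"}'  # as returned by CouchDB
doc = json.loads(raw)     # JSON text -> native dict
print(doc["doc_type"])    # field access: shopper_delight
print(json.dumps(doc))    # dict -> JSON text, ready to forward elsewhere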

Easier Procedure

Instead of the multi-call native flow, the update handler uses the conventional POST method. That means you just need to send two parameters in the POST. How tough can that be?

Security 

Update handlers use the REST API for the HTTP call. If you are using OAuth, hashing and HTTPS (secure HTTP, which search engines have already made all but essential), and taking care of the other basic aspects, you can rest assured of a high level of security.

In Summary

The CouchDB update handler offers greater efficiency and ease of use across the board – you can tell we’re excited about it, and with good reason. If you make the switch, the new approach may take a little getting used to, but we are here to help. Having trouble using the CouchDB update handler, or still unsure whether to switch? Feel free to contact us and we’ll do our best to help you out.

The Benefits of Clear Requirements for IT Solution Delivery

Twenty-five percent of all technology projects fail, while a further 20-25 percent don’t show a return on investment, according to research. Unworkable timescales, ambiguous guidelines, a lack of clarity — these are just some of the reasons for IT implementation failure. As businesses rush new products to market, IT solution teams often grapple with unclear requirements and struggle with quality assurance processes. The result? Inferior products and disgruntled customers. Below, I’ve outlined some of the benefits of a good requirement management strategy.

Higher Quality Products

Proper requirements gathering — where IT solution teams collect a statement of key objectives, important product information, specifications and design constraints — helps developers create higher quality products. They can deliver systems with fewer defects — which results in higher levels of satisfaction from stakeholders, consumers, and users.

Yet many businesses still don’t have adequate requirements processes in place. One in three companies admits they are not doing enough to grow skills and competency in requirements definition and management, and a whopping 90 percent realize their current requirements practices are just not good enough. I think brands should not only provide IT solution teams with in-depth information and guidelines but also collaborate with them through the entire product production process.

Shorter Delivery Times

Research shows that inadequate requirements specifications and changes in requirements are the top two factors that contribute to failed system development contracts. Businesses that utilize clear guidelines at every stage of the IT lifecycle — from initial concept through to development and distribution — allow IT solution teams to focus on their objectives. Explicit requirements help these teams get products to market in a shorter time frame and speed up delivery times.

The most successful IT implementation projects are those where businesses and IT solution teams understand each other’s objectives and concerns, and exchange information and ideas throughout the process.

Lower Costs

Requirements management lowers costs during the IT development process. Once IT solution teams have a clear, comprehensive set of guidelines to work with, they can develop systems with fewer unnecessary features, and reduce defects and rework. They no longer have to make estimates and assumptions when building systems, either.

Good requirements management can cut product development costs by up to a factor of three and boost profit margins by 230 percent, according to research. These statistics should encourage more businesses to invest in requirements definition and management when working with IT solution teams.

Meet Business Expectations

IT solution teams with clear requirements meet business expectations, increasing their chances of working with the same organization again in the future. They have a sufficient flow of information at all times — and can contact the organization if they have questions or concerns — and can meet the needs of the client appropriately.

Fifty-eight percent of projects that fail do so because of a lack of alignment, according to one study: they were not correctly aligned with the client’s organizational strategy. It makes sense, therefore, for business managers and IT solution teams to meet before a new project begins and brainstorm ideas.

IT solution teams who gather the correct requirements from managers are more likely to succeed. They reduce development costs, shorten product delivery times and produce higher quality products. Going forward, all business managers should endeavor to make their intentions, goals, and objectives clear to avoid IT implementation failure.

Not Just Numbers: Driving Social Good with Data

If knowledge is power, then data should be an engine for making the world a better place. Finding good news about technology can be hard in a world full of cyber threats and shrinking privacy. It’s out there, though, and here’s the proof: businesses and organizations of all sizes really are working to put data science to work on solving large-scale social challenges.

From identifying biases that impact minorities and women to innovative ways of showing people how to lead healthier lives, these creative uses of data are turning business intelligence into the new superhero of social movements.

The following organizations represent the advance guard of those transforming data into philanthropy.

Safety Nets Come in All Sizes
When you think about what people need in the developing world, you probably think of water, food, shelter, medicine, and so on. What you probably don’t think about is low-income insurance. This once-traditional industry is waking up to the need to apply data science to improving the lives of people living in dangerous conditions. MicroEnsure has already paid out more than $28 million in claims and pledged to serve at least five million more low-income households by 2020.

Outwitting Bias in Hiring
HR managers are pulling more and more social media data into their hiring practices, but that’s not necessarily helping to bring more women and minorities into the business world. Big data can do a better job of blinding decisions to natural hiring bias, as long as businesses follow the best practices laid out at the EEOC’s panel on big data in the workplace. Marko Mrkonich of Littler Mendelson told the forum that the latest data-driven software “expands the applicant pool beyond those who even apply…. Big data, used correctly, eliminates discrimination in many of its most egregious forms.”

Freedom from Debt Traps
More than half of the US is struggling with financial problems due to subprime credit. Lack of credit drives people into debt traps like pawn shops and payday loans. This not only makes life harder, it has a substantial dampening effect on national spending. LendUp is one of a new breed of FinTech providers using big data analytics to help people improve their credit and open up more life choices for tens of millions of Americans.

Improving Health and Bolstering Resilience
If video games made kids healthier, there might be no public health crisis in the country. That kind of thinking is exactly what drove HopeLab to build a video game that fights cancer. Since that success, they have broadened their mission to build apps that improve public health based on data science.

Connecting the Data Points
There are many social programs in need of digital guidance and many technologists who want to do more with their skills, but bringing them together on projects where they can sync up their strengths has not been easy. DataKind is at the forefront of organizations bringing data science needs and resources into alignment.

The Next Wave
My own experiences in helping those in need as part of Living Water International changed how I view the world in ways I never imagined. I still carry a rock that came from a well near Saba, Honduras, as a reminder of what we can accomplish when we work together. Looking around, I can now see that there are endless ways to bring data into the social responsibility sphere and no limit to the number of non-profits in need of advanced business intelligence. Now is a great time to get involved, but there’s no time to waste. Data and technology have made communication and coordination far easier than any time in history. All that’s missing is the willpower to turn data into meaningful answers.

Sources:

https://microensure.com/
https://www.shrm.org/hr-today/news/hr-news/pages/eeoc-panel-looks-at-implications-of-big-data-in-workplace.aspx
https://medium.com/@LendUp/fintech-for-social-good-looking-back-at-why-we-started-lendup-while-looking-forward-to-2017-1bf62b97da5
http://www.hopelab.org/portfolio/

Let’s Talk Message Queuing Products

In the current technological environment, many systems and applications are built on cloud architecture. For better operations, these applications are developed as smaller blocks with dedicated functions and then brought into sync with the help of message queuing architecture. Niche applications are becoming more and more available; at the same time, they are heavily relied upon, often supporting critical business functions and real-time or near-real-time capabilities, and solving major problems. They are also expected to integrate extensively with third-party applications.

Message queuing products help with this.  

These products are essentially the communication link between decoupled applications. Acting as temporary message storage, they bring coordination and performance enhancements, with all messages handled in sequential order. The messages transferred over a message queuing product are effectively work orders for the next application, which propagates the tasks onward in a reliable fashion. Message queuing products continue to evolve and progress, so it’s worth a quick review of their architecture before taking a step back to evaluate some of them on their merits.

Message Queue Architecture

Message queue product architecture begins with the parts of the application that generate messages, called producers. The subsequent applications that receive the messages are known as consumers; they take the “work order” and act upon it. The message queue is capable of storing a message temporarily if the next application is busy or not connected. Message queue architecture makes the components independent and simplifies the coding of decoupled applications.
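
To make the roles concrete, here is an in-process illustration of the pattern in Python – a stand-in for a real broker, with names of our own choosing – using the standard library’s queue module:

import queue
import threading
import time

work_orders = queue.Queue()  # the "message queue": temporary message storage

def producer():
    # The producer only talks to the queue, never to the consumer directly
    for order_id in range(3):
        work_orders.put(f"work order {order_id}")

def consumer():
    while True:
        order = work_orders.get()  # blocks until a message is available
        time.sleep(0.1)            # simulate a busy consumer
        print(f"processed {order}")
        work_orders.task_done()

threading.Thread(target=consumer, daemon=True).start()
producer()
work_orders.join()  # wait until every queued work order has been handled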

Message queuing as a service

If you are already working in the cloud and looking to implement a messaging capability, most cloud providers offer some sort of messaging as a service. This can be helpful if you don’t want to roll your own, since it is all managed for you. That said, it can lead to provider lock-in, depending on what the provider has implemented, so make sure to proceed with caution. If that is not a concern for you, then also consider capacity, throttling, throughput, and so on. Messaging as a service can be a great option, but each situation has conditions that should be evaluated for proper fit.

Running your own solution – Leading Message Queuing Products

Let’s take a look at a list of some of the leading, successful message queuing products available right now.

1)    RabbitMQ

RabbitMQ has been widely adopted, largely due to its open-source status. The code is minimalistic, and it serves as an excellent message queuing product for both small and large-scale implementations. It supports multiple asynchronous messaging protocols, runs smoothly across multiple operating systems, and offers support for cloud environments.

What protocols does RabbitMQ support?

RabbitMQ supports a wide range of messaging protocols, either directly in the core or through plugins. The core of RabbitMQ was coded to support AMQP 0-9-1, a binary protocol, and support has since been extended to newer versions of AMQP. STOMP is a text-based messaging protocol – the only one of these that can be used over telnet, a client-server protocol – and a RabbitMQ plugin supports all versions of STOMP. RabbitMQ also supports MQTT for lightweight publish/subscribe messaging semantics. AMQP 1.0, one of the more complex protocols, is supported by a RabbitMQ plugin. And although HTTP is not a messaging protocol, RabbitMQ can transmit messages over it as well.

What standards do they conform to?

RabbitMQ conforms to AMQP (Advanced Message Queuing Protocol).

Synchronous vs. asynchronous support

RabbitMQ supports asynchronous messaging, which means applications do not have to wait for a real-time response.
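
As a minimal sketch using pika, a widely used Python client for RabbitMQ (the broker address and queue name are our own assumptions), a producer can publish a message and move on without waiting for a consumer:

import pika  # RabbitMQ client: pip install pika

# Connect to a hypothetical local broker and declare a durable queue
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="work_orders", durable=True)

# Publish a "work order" and return immediately; whichever consumer is
# subscribed to the queue will pick it up whenever it is ready
channel.basic_publish(
    exchange="",
    routing_key="work_orders",
    body='{"task": "update-inventory", "id": 42}',
    properties=pika.BasicProperties(delivery_mode=2),  # persist the message
)
connection.close()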

Configuration Considerations for High Availability clusters

RabbitMQ can be deployed in distributed and federated configurations, and it handles complex message queuing needs very well.

What types of plugins/connectors are supported?

RabbitMQ supports the following plugins and connectors: rabbitmq_amqp1_0, rabbitmq_auth_backend_ldap, rabbitmq_auth_mechanism_ssl, rabbitmq_consistent_hash_exchange, rabbitmq_federation, rabbitmq_federation_management, rabbitmq_management, rabbitmq_management_agent, rabbitmq_mqtt, rabbitmq_shovel, rabbitmq_shovel_management, rabbitmq_stomp, rabbitmq_tracing, rabbitmq_trust_store, rabbitmq_web_stomp, rabbitmq_web_mqtt, rabbitmq_web_stomp_examples, rabbitmq_web_mqtt_examples

2)    Apache ActiveMQ

Apache ActiveMQ is an open-source message queuing product developed in Java and integrated with a JMS client. Along with enterprise features, it supports clients and servers written in multiple languages.

What protocols do they support?

Apache ActiveMQ supports 10 wire-level protocols, offering extensive interoperability. The core of Apache ActiveMQ was coded to support the OASIS standard for the AMQP binary protocol, and newer versions (Apache ActiveMQ 5.13.0 onward) support wire-format protocol detection through the auto protocol. MQTT, an M2M publish/subscribe messaging transport, is supported by Apache ActiveMQ and helps in the automatic mapping between JMS and MQTT clients. OpenWire is a cross-language wire protocol supported by Apache ActiveMQ; it facilitates access to ActiveMQ from a variety of languages and platforms. Apache ActiveMQ also maps REST to JMS. Apart from these, it supports RSS, Atom, STOMP, WSIF, WS-Notification and XMPP.

What standards do they conform to?

Apache ActiveMQ supports the relevant Java standards, with J2EE, JMS 1.0 and JMS 2.0 all extensively supported.

Synchronous vs. asynchronous support

By default, Apache ActiveMQ sends all persistent messages synchronously. It is possible to enable asynchronous message sending, however, in either of two ways (see the sketch after this list):

  • Set the useAsyncSend property on the ActiveMQConnectionFactory, or
  • Set the property via the connection URI.
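
For the URI route, ActiveMQ’s documented jms.useAsyncSend connection option is appended to the broker URL, along these lines (host and port are illustrative defaults):

tcp://localhost:61616?jms.useAsyncSend=true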

Configuration Considerations for High Availability clusters

Apache ActiveMQ can be deployed in distributed and federated configurations.

What types of plugins/connectors are supported by Apache ActiveMQ?

  • Maven2 plugin to easily start up a JMS broker
  • Destination plugin
  • TimeStamp plugin
  • Statistics plugin

3)    JORAM

JORAM, which stands for “Java Open Reliable Asynchronous Messaging,” is an open source Java implementation of JMS that offers high availability for clusters.

JORAM offers full support for JMS 1.1, JMS 2.0 and Java EE as an open source software component. It is well suited to heterogeneous systems and various Java platforms, and programs can easily use it as a mature messaging solution without being dependent upon third-party plugins. JORAM also features atomic storage for long queues. For remote clients and servers, it offers a TCP-based implementation.

What protocols do they support? 

JORAM supports asynchronous messaging and offers plugin support for protocols like AMQP, MQTT, STOMP and more. Beyond that, JORAM supports the following web protocols: TCP/IP, HTTP, SOAP-XML and SSL.

What types of plugins/connectors are supported by JORAM?

JORAM supports the following plugins and connectors: TxLog, Batch engine, Batch network

Synchronous vs. asynchronous support

Due in large part to its commitment to atomic transactions, JORAM offers optimized messaging, allowing quick transactions on persistent messages, with transaction logs written asynchronously. Because it handles transactions in batches, JORAM offers fast queuing, subscription matching and message routing, along with high runtime performance. Communication over JORAM is secure and reliable without increasing transmission latency, and a few plugins facilitate bi-directional communication over a single connection – a major pull factor for JORAM.

What standards do they conform to?

JORAM is fully compliant with JMS 1.1, JMS 2.0 and Java EE, and JMS plug-in applications integrate seamlessly with that extensive support. When it comes to protocols, JORAM supports the basic internet and web-based protocols.

Configuration Considerations for High Availability clusters

High-availability cluster configuration is provided by a3servers.xml. The XML file is read only when a server starts – if you wish to apply a new configuration, it is necessary to stop the current server and have it read a new a3servers.xml.
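
As a rough illustration of the shape of that file – the element names follow JORAM’s documented examples, but treat the hosts, ports and service classes here as assumptions to verify against your JORAM version – a two-server configuration looks something like this:

<?xml version="1.0"?>
<config>
  <domain name="D0"/>
  <server id="0" name="S0" hostname="host1">
    <network domain="D0" port="16300"/>
    <service class="org.objectweb.joram.mom.proxies.tcp.TcpProxyService" args="16010"/>
  </server>
  <server id="1" name="S1" hostname="host2">
    <network domain="D0" port="16300"/>
    <service class="org.objectweb.joram.mom.proxies.tcp.TcpProxyService" args="16010"/>
  </server>
</config>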

JORAM supports transient, persistent, transactional and XA messaging, and sessions can be enlisted in a JTA transaction, registered as per JTA semantics. JORAM offers complete specification support for JMS 1.1 and JMS 2.0.

As an open-source software component supporting asynchronous messaging, JORAM spans heterogeneous systems from J2EE to J2ME. It comes across successfully as a new-age, mature messaging solution with high capability, and it does not need a high degree of third-party support, as it has inherent atomic storage and a distributed JNDI server.

Comparison

Having reviewed these message queuing products individually, it may be helpful to compare them side by side – along with a few other prominent players – to gain a view of their real capabilities. Here, we lay out their various advantages and disadvantages in terms of durability, security policies, message purging policies, and message filtering.

The Benefits of Message Queuing Tools

There is a wide variety of benefits associated with message queuing tools, as we’ve examined in technical detail above. Here is a quick overview of some of their more general benefits, which may end up informing your decision:

  • Programs do not need direct connections.
  • Communication channels are not time-dependent with an asynchronous mode.
  • Small programs work well with a decoupled mode.
  • Events drive the communication propagation.
  • They allow prioritization and filtering of messages.
  • They allow secure message propagation.
  • They maintain data integrity.
  • They support the recovery of messages.

Message queuing often offers improved performance, increased reliability, granular scalability and simplified decoupling. Beyond that, it reduces the need for direct program-to-program communication, and it preserves data integrity at scale by dividing the work into smaller units.

While there are many different options available, make sure you take the time to examine them and take careful stock of what each message queuing product does or does not offer. I hope this guide has provided some insightful information to help you choose the product that best serves your needs.

AI Is Not Just for Robots

As technology continues to infuse itself into our daily lives, we are faced with realities that previously existed only in science fiction novels and movies. One such technology, often vilified in works of fiction, is artificial intelligence, or AI. It is great fodder for storylines: machines that no longer have to obey their creators (humans) work to destroy people, with or without provocation, as if it were an inevitability. This obviously deters many from pursuing – or, in other cases, allowing the pursuit of – this technology. However, robots that seek the destruction of mankind do not have to be the inevitable conclusion.

Managerial Decisions
Using AI in the world of business is a great way to showcase the benefits of the technology. For example, company managers routinely make decisions based on data that has to be interpreted and modeled against patterns. Being able to recognize those patterns and interpret them to create a best-case scenario from the current dataset is a huge skill that AI software can bring to the table – it is why pattern-oriented software and big data have been merging over the past decade. Tools of this kind enhance the manager’s decision-making ability.

Mergers and Acquisitions
Another example is a mergers-and-acquisitions manager who is trying to identify acquisition options for a new company direction. This could take weeks or even months of detailed research on hundreds of companies across the world, vetting them on any number of specific criteria. With an AI assistant, much of this could be automated, leaving the manager with the job of triaging the candidates the software assistant surfaced to find the perfect match.

Research
These examples are helpful for decision making, but AI could also excel at assisting with time-intensive tasks. For example, a researcher may spend weeks sifting through information for specific context. An AI software assistant could cut that time down to days, or possibly even hours, by analyzing patterns and using integrated systems in the cloud to pull information while the researcher analyzes the data even further.

These are just a few simple ways AI can benefit us in the future – entirely possible scenarios given the sophisticated software architectures, big data integrations and pattern-oriented software we are already seeing in the market. It doesn’t have to be scary, and it doesn’t mean we will have evil overlord robots, either. It means using technology to enhance our own abilities so we can push our intelligence beyond what we have dreamed.