Magazines, Books and Articles

Thursday, January 22, 2015

Error using cURL to POST or PUT data formatted in JSON in Windows

If you use cURL in a cmd.exe shell on Windows, an attempt to POST or PUT data formatted in JSON results in an error. An example:

This is the data:
{"Continent":"AS","ContinentName":"Asia","CountryName":"India","Capital":"New Delhi","Iso_Alpha2":"IN","Iso_Numeric":"356","Iso_Alpha3":"IND","FipsCode":"IN"}
curl -i -X PUT -d "{"Continent":"AS","ContinentName":"Asia","CountryName":"India","Capital":"New Delhi","Iso_Alpha2":"IN","Iso_Numeric":"356","Iso_Alpha3":"IND","FipsCode":"IN"}"  -H "Content-type:application/json; charset=UTF-8" http://192.168.1.6:5984/geodb-country-test/IN

HTTP/1.1 400 Bad Request
Server: CouchDB/1.6.1 (Erlang OTP/R16B02)
Date: Thu, 22 Jan 2015 06:28:26 GMT
Content-Type: text/plain; charset=utf-8
Content-Length: 48
Cache-Control: must-revalidate

{"error":"bad_request","reason":"invalid_json"}
This is because the cmd.exe shell on Windows has no single-quote string syntax: the unescaped double quotes inside the JSON are consumed during argument parsing, so the data that curl sends to the server is no longer valid JSON.
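You can see what the shell actually hands to curl with a tiny helper. This is a minimal sketch in a POSIX shell; cmd.exe pairs quotes differently in detail, but it too consumes the unescaped quotes before the program sees its arguments:

```shell
# A helper that prints each argument it receives, one per line in brackets,
# so we can see what survives the shell's quote removal.
show_args() { printf '[%s]\n' "$@"; }

# The double quotes around the keys and values are eaten by the shell:
show_args -d {"Capital":"Delhi"}
# prints:
# [-d]
# [{Capital:Delhi}]
```

The program receives `{Capital:Delhi}` with no quotes at all, which is exactly the kind of mangled payload that makes the server reply `invalid_json`.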

One workaround is to escape every double quote in the JSON with a backslash. The following works:
curl -i -X PUT -d "{\"Continent\":\"AS\",\"ContinentName\":\"Asia\",\"CountryName\":\"India\",\"Capital\":\"New Delhi\",\"Iso_Alpha2\":\"IN\",\"Iso_Numeric\":\"356\",\"Iso_Alpha3\":\"IND\",\"FipsCode\":\"IN\"}"  -H "Content-type:application/json; charset=UTF-8" http://192.168.1.6:5984/geodb-country-test/IN

HTTP/1.1 201 Created
Server: CouchDB/1.6.1 (Erlang OTP/R16B02)
Location: http://192.168.1.6:5984/geodb-country-test/IN
ETag: "1-d4c8b330d781b982184e0e6829f434cd"
Date: Thu, 22 Jan 2015 06:31:26 GMT
Content-Type: text/plain; charset=utf-8
Content-Length: 65
Cache-Control: must-revalidate

{"ok":true,"id":"IN","rev":"1-d4c8b330d781b982184e0e6829f434cd"}
Escaping all the quotes in the data by hand would be a huge pain; an online escaping tool can do the donkey work.
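The escaping is also mechanical enough to script. A minimal sketch in a POSIX shell with sed, using a shortened version of the country document from above:

```shell
# Escape every double quote in the JSON with a backslash, producing a string
# that can be pasted into a cmd.exe curl -d "..." argument.
json='{"Continent":"AS","CountryName":"India"}'
escaped=$(printf '%s' "$json" | sed 's/"/\\"/g')
printf '%s\n' "$escaped"
# prints: {\"Continent\":\"AS\",\"CountryName\":\"India\"}
```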

Another workaround is to write the data to a file and pass the file to curl with the @ syntax:
curl -i -X PUT -d @"C:/Shared Folder/country.json"  -H "Content-type:application/json; charset=UTF-8" http://192.168.1.6:5984/geodb-country-test/countries

HTTP/1.1 201 Created
Server: CouchDB/1.6.1 (Erlang OTP/R16B02)
Location: http://192.168.1.6:5984/geodb-country-test/countries
ETag: "1-568f829b1bcc952ba27ca7a084428390"
Date: Thu, 22 Jan 2015 07:42:12 GMT
Content-Type: text/plain; charset=utf-8
Content-Length: 72
Cache-Control: must-revalidate

{"ok":true,"id":"countries","rev":"1-568f829b1bcc952ba27ca7a084428390"}
cURL
Windows Installer

Tuesday, December 2, 2014

RESTful web service - versioning the API

This is the fourth post in this series on RESTful web services.

Part 1, Part 2, Part 3

In this post we discuss the need for versioning an API and the possible ways to do it. The need for versioning is not restricted to the REST scenario alone - whatever is applicable to the REST scenario is also applicable to HTTP web services.

APIs don't exist for themselves - they are meant to be consumed by clients. When an API is considered stable and shipped to production, a client consuming it buys into the media type and the representation of the resources it describes. If the API were to change - and it will, as change is the only constant in our industry - its clients will in all probability break.

This is the fundamental reason to version APIs.
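To make the idea concrete before the full post: the version commonly lives either in the URI path (e.g. /v2/countries/IN) or in the media type the client negotiates. In the latter style a server has to parse the version out of the Accept header; a hedged sketch in a POSIX shell, with a made-up media type name:

```shell
# The client names the contract version inside a vendor media type in the
# Accept header; the server extracts the version number before choosing a
# representation. The 'vnd.geodb' name is invented for this illustration.
accept='application/vnd.geodb.v2+json'
version=$(printf '%s' "$accept" | sed -n 's/.*\.v\([0-9][0-9]*\)+json/\1/p')
echo "client asked for version $version"
# prints: client asked for version 2
```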

Read the full post here.

Wednesday, November 12, 2014

RESTful web service - media types and resource representation

This is the third post in this series on RESTful web services.

Part 1, Part 2, Part 4

In this post we will examine the role of internet media types, especially the 'application' media type in a REST service.

The media type, to a large extent, describes the contract between the server and the client in a REST web service.
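As a concrete illustration (the media type name below is invented, not from the full article): a vendor-specific media type names the resource and the wire format in one string, which the client sends in Accept and the server confirms in Content-Type. The base type and its parameters split cleanly:

```shell
# A made-up vendor media type, split into its base type and parameters
# using POSIX parameter expansion.
media_type='application/vnd.geodb.country+json; charset=UTF-8'
base=${media_type%%;*}      # everything before the first ';'
params=${media_type#*; }    # everything after 'base; '
printf '%s\n' "$base" "$params"
# prints:
# application/vnd.geodb.country+json
# charset=UTF-8
```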

Read the full article here.

Sunday, October 19, 2014

RESTful web service - how do I design one?

This is the second post in this series on RESTful web services.

Part 1, Part 3, Part 4

In this post we discuss how to design and build a REST API (a collection of REST web services).

As with any design, we ask ourselves what our end goals are, and then decide how to go about achieving them. I believe the questions below set out the minimum end goals for designing a successful REST API.

1. How do we discover a resource? What is its URI? How do we map HTTP methods to the actions possible on an entity?
2. What is the representation of the resource?
3. How do we decide on a contract for the services? 

Answers to questions 1 and 2 are necessary for designing both HTTP and REST web services; the answer to question 3 really defines what makes the service RESTful.

We'll cover the first two in this post, and the third in a subsequent post.
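Question 1 can be sketched in miniature. The mapping below is the conventional CRUD one, hedged - the full article may refine it:

```shell
# The uniform interface: each HTTP method maps to one action on a resource.
action_for() {
  case "$1" in
    GET)    echo "read the resource's representation" ;;
    POST)   echo "create a new resource in a collection" ;;
    PUT)    echo "create or replace the resource at this URI" ;;
    DELETE) echo "remove the resource" ;;
    *)      echo "unmapped" ;;
  esac
}
action_for PUT
# prints: create or replace the resource at this URI
```

This is exactly the shape of the CouchDB calls in the cURL post above: PUT to a document URI creates or replaces that document.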

Read the full article here.

Tuesday, September 30, 2014

RESTful web service - what is this?

Many of us have, at some point or another, created and consumed web services. Many of us have created and consumed RESTful (or REST) web services, because that is the trend nowadays. Or have we actually created an HTTP-based web service and not really a REST web service? It's pretty easy to create an HTTP-based web service with, for example, Microsoft's ASP.NET Web API 2 framework and provide the consumer of the service with a JSON payload. But there is more to a REST web service than 'just returning a JSON payload'.

So what is a REST web service?

Read the full article here.

Part 2
Part 3
Part 4
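The 'more than just a JSON payload' point can be made concrete with a sketch (field names and URIs here are invented for illustration): a plain HTTP service returns bare data, while a REST response also carries hypermedia links telling the client what it can do next.

```shell
# A hypothetical REST representation of a country resource: the data plus
# links the client can follow, instead of data alone.
body='{"id":"IN","name":"India",
 "links":[{"rel":"self","href":"/countries/IN"},
          {"rel":"capital","href":"/countries/IN/capital"}]}'
printf '%s\n' "$body"
```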

Saturday, May 3, 2014

The rise and rise of data

Even a cursory study of the history of the internet, especially after the advent of the World Wide Web, indicates the power of the medium. In terms of users, it grew 588.4% between Dec 2000 and June 2013, from around 369 million to 2.4 billion, though by June 2013 still only 34.3% of the world's population had access to the internet [source]. Penetration is expected to grow to 45% by 2016, bringing the digital world to almost 3.4 billion users.

Businesses and governments have taken advantage of the internet, especially of the WWW, to create applications that many of us can't do without. Shopping for most of your needs, reserving your airline, train and bus tickets, planning and managing your holiday tours, most banking activities, paying your bills, buying and renewing your insurance, filing your tax returns - all of these can be done over the internet. Streaming music and video keeps us entertained, social networking applications and blogs fulfil our need to share, Skype and WhatsApp help us connect. Google Drive, OneDrive and Dropbox store our important documents and let us share them selectively. Almost every sphere of activity has applications dedicated to it - photography, ornithology, skill enhancement, stocks, crowdfunding ideas into products, you name it - all allowing us to generate and use content.

All of these generate data, huge volumes of it. The infographic in this article gives an indication of how much user-driven data is generated every minute. CISCO estimates total global IP traffic will reach 120,643 PB per month by 2017, up from 43,570 PB per month in 2012. (What is a petabyte (PB)? Also see here.)

Not all data is user generated.

An example is India's 'Aadhaar' initiative, implemented by the Unique Identification Authority of India [UIDAI]. Briefly, it aims to provide a 12-digit unique identification number to every Indian (that's over a billion people), starting August 2009. Its mandate is to provide these numbers to 600 million (60 crore) citizens by 2014. It enrols a citizen by collecting his/her iris and thumb scans along with demographic data. Each enrolment pack is 5 MB of data. So far it has generated 1500 PB of data [source].

Every government department generates a humongous amount of data. Analysis of these data creates more data, usually in the form of reports. Government rules and regulations mandate businesses to generate their Annual Reports and publish them in the public domain. Organisations like LexisNexis collate business data from the public domain all over the world and create tools such as their Dossier Suite.

Look around, and everyone is creating data in very large volumes: the United Nations, stock exchanges, financial and credit rating organisations, researchers and scientists, even machines and processes whose sensors report working parameters at regular intervals.

The question then is: how do we persist and maintain such large volumes of data? And how do we use it? We'll explore these questions in subsequent posts.

Tuesday, April 8, 2014

Commodity hardware

One of the USPs of NoSQL databases is their ability to run on 'commodity hardware/ machines/ servers/ clusters'. So what is commodity hardware?

My desktop PC is powered by a 4th-generation Intel i5 processor and has 16 GB of RAM (initially 8, later upgraded to 16) and a 1 TB HDD. The hardware is reliable and affordable, even though it was put together by an assembler. It is easy to upgrade when required - as I did with the RAM. I have no vendor lock-in: my first preference for an upgrade would be the current vendor, but I am not bound to buy from him. Nor am I too concerned about the make of the components - the industry has matured to the point where there is little to distinguish one make from another.

The same is true of a branded desktop you may have bought. The warranty may tie you to the vendor for a while, but nobody can really stop you from adding memory, storage or a game card, or even replacing the motherboard and the CPU.

This, essentially, is a commodity machine:
Affordable, reliable and upgradeable
No (or limited) vendor lock-in

A commodity cluster is made up of commodity machines working in parallel to increase computing power. Serious supercomputers have been built this way; the nitty-gritty of building and maintaining clusters is well known. In the NoSQL context, clusters increase performance and provide scalability: add machines to the cluster when demand increases, perhaps remove one when demand drops - all without disrupting services.

A start-up from Bangalore has built a data center using commodity hardware. A case study worth reading.