Tuesday, July 30, 2013

The Z3 Constraint Solver, a developer perspective

Z3 is a high-performance SMT solver developed at Microsoft Research. It has been integrated with many tools from Microsoft for program analysis, testing, and verification.

What does SAT mean?

SAT refers to the Boolean satisfiability problem, where we want to determine whether there exists an interpretation that satisfies a given Boolean formula. In other words, it establishes whether the variables of a given Boolean formula can be assigned in such a way as to make the formula evaluate to true. For example, A AND (NOT B) is satisfiable (take A = true, B = false), while A AND (NOT A) is not.

What does SMT mean?

SMT stands for Satisfiability Modulo Theories. An SMT instance is a formula in first-order logic, where some functions and predicates have additional interpretations. The SMT problem is the decision problem of determining whether such a formula is satisfiable or not.

An SMT instance is a generalization of a Boolean SAT instance in which various sets of variables are replaced by predicates over a suitable set of non-binary variables.

These predicates are classified according to the theory they belong to. For instance, linear inequalities over real variables are evaluated using the rules of the theory of linear real arithmetic.

What is an SMT solver?

The goal of an SMT solver is to determine whether an SMT instance can evaluate to true or not. The same applies, by analogy, to SAT solvers.

SMT solvers

There are many SMT solvers available, but only one of them ships with a C# API: Z3. For a list of available SMT solvers, refer to this page.

Download Z3

You can download Z3 from http://z3.codeplex.com/. I downloaded Z3 4.3.0 and extracted it to C:\z3.

C# Example

In the following example we are going to let Z3 solve the following equation system:

  • x > 0
  • y = x + 1 
  • y < 3

Solving the equations means finding values for x and y that make the whole formula evaluate to true.

  • Let’s create a new console application project in Visual Studio.
  • Add reference to Microsoft.Z3.dll which is located in the bin directory of the Z3 installation directory.
  • Copy the file libz3.dll from the bin directory of the Z3 installation directory to your project build directory.
  • Now edit your code to look like the following:
using System;
using Microsoft.Z3;

namespace Z3Demo1
{
    class Program
    {
        static void Main(string[] args)
        {
            using (Context ctx = new Context())
            {
                Expr x = ctx.MkConst("x", ctx.MkIntSort());
                Expr y = ctx.MkConst("y", ctx.MkIntSort());
                Expr zero = ctx.MkNumeral(0, ctx.MkIntSort());
                Expr one = ctx.MkNumeral(1, ctx.MkIntSort());
                Expr three = ctx.MkNumeral(3, ctx.MkIntSort());

                Solver s = ctx.MkSolver();
                s.Assert(ctx.MkAnd(ctx.MkGt((ArithExpr)x, (ArithExpr)zero),
                                   ctx.MkEq((ArithExpr)y, ctx.MkAdd((ArithExpr)x, (ArithExpr)one)),
                                   ctx.MkLt((ArithExpr)y, (ArithExpr)three)));
                Console.WriteLine(s.Check());

                Model m = s.Model;
                foreach (FuncDecl d in m.Decls)
                    Console.WriteLine(d.Name + " -> " + m.ConstInterp(d));

                Console.ReadLine();
            }
        }
    }
}


Now let’s run the code above and see the output. The solver says the equation system is satisfiable and then gives us the x and y values that satisfy it.






How does it work?



To interact with Z3 from C#, you need a Context object. Variables and numerals in your equations are modeled as Expr objects, which you obtain through member functions of the Context object (MkConst(), MkNumeral(), …). You construct your operations using member functions of the Context object as well (MkGt(), MkAdd(), MkLt(), …). To solve all the equations together, you hook them up with an AND operator, implemented by Context.MkAnd(). After combining everything into one AND, you pass it to the solver through Solver.Assert(). And as you may have guessed, you obtain the Solver itself using Context.MkSolver().



Solver.Check() will tell you whether this equation system can be solved or not. To get the variable assignments that the solver came up with, get a Model object from Solver.Model. Then use the Decls collection to get all symbols that have an interpretation in the model. Model.ConstInterp() returns the value assigned to a symbol.
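To see the other possible outcome, here is a minimal sketch using the same API with a deliberately contradictory system; Check() now reports it as unsatisfiable and no model is available:

using (Context ctx = new Context())
{
    ArithExpr x = (ArithExpr)ctx.MkConst("x", ctx.MkIntSort());
    ArithExpr zero = (ArithExpr)ctx.MkNumeral(0, ctx.MkIntSort());

    Solver s = ctx.MkSolver();
    // x > 0 and x < 0 can never hold at the same time
    s.Assert(ctx.MkAnd(ctx.MkGt(x, zero), ctx.MkLt(x, zero)));
    Console.WriteLine(s.Check()); // prints UNSATISFIABLE
}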



In this post we briefly introduced SAT, SMT, and their solvers. Then we explored the only SMT solver that ships with a C# API. Now you can play with as many equations as you want, check them for satisfiability, and even get the solution values.

Wednesday, July 24, 2013

Get path conditions from Microsoft Pex

We talked in a previous post about Microsoft Pex and how it chooses values to explore your code by executing it symbolically and covering all its paths. A code path is represented by all the conditions that have to be satisfied in order to make the code execution go down this path. Your code's conditional statements lead to different paths according to different input values. Each test case generated by Pex represents the code behavior for specific input values that lead to a specific code path.

Within the parameterized unit test, you can use the GetPathConditionString method of the PexSymbolicValue class to obtain a textual representation of the current path condition, a predicate that characterizes an execution path (shown in the Condition column of the Pex results view).
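As a rough sketch of how this is typically called (the class and inputs here are hypothetical, not from the original post):

using Microsoft.Pex.Framework;

[PexClass]
public partial class SampleTest
{
    [PexMethod]
    public void Run(int x, int y)
    {
        if (y != 0 && x > y)
        {
            // textual form of the conditions that led execution here,
            // e.g. "y != 0 && x > y"
            string condition = PexSymbolicValue.GetPathConditionString();
            PexObserve.ValueForViewing("Condition", condition);
        }
    }
}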


Methods ToRawString and GetRawPathConditionString of the PexSymbolicValue class return expressions representing symbolic values and the path condition, formatted as S-expressions.

As we know, Pex uses the Z3 SMT solver behind the scenes. If you want to get the path conditions before being passed to Z3, do the following:

  1. Create an environment variable named z3_logdir with the value c:\z3_logdir (or any other directory you want).
  2. Create an environment variable named z3_loglevel with the value constraintsandmodel.

Now, Pex will create *.z3 files in subdirectories of the z3_logdir you specified in step 1. These files are in Z3's native format and can later be passed to Z3. For more information about this format, refer to the Z3 online documentation.

In future posts we will talk more about the Z3 native format and what we can do with it.

Tuesday, July 23, 2013

Bring “Sign in as Different User” back to SharePoint 2013

If you are new to SharePoint 2013 like me and got confused looking for the "Sign in as a Different User" menu command, don't worry: it is gone by design, but we can easily bring it back.

Do the following steps to get the link back:

  • Locate the file \15\TEMPLATE\CONTROLTEMPLATES\Welcome.ascx and open it for editing.
  • Add the following snippet (a MenuItemTemplate element) before the existing element that has the id "ID_RequestAccess".

<SharePoint:MenuItemTemplate runat="server" ID="ID_LoginAsDifferentUser"
Text="<%$Resources:wss,personalactions_loginasdifferentuser%>"
Description="<%$Resources:wss,personalactions_loginasdifferentuserdescription%>"
MenuGroupId="100"
Sequence="100"
UseShortId="true"
/>

Save and close the file. Now you should get the link “Sign in as Different User” back.

Wednesday, July 17, 2013

Create a new Web Part Page with Quick Launch menu

This post is about a simple task we do daily in SharePoint 2010: creating a new Web Part page. Every time we need a new Web Part page, we go to All Site Content > Create > Web Part Page > Create. Now we have a Web Part page, but no quick launch menu on the left.

To show the quick launch menu on the newly created page:

  • Open the page for editing in SharePoint Designer 2010.
  • Switch to the Code view.
  • Click on the Advanced Mode button on the ribbon.
  • Now look for the following code snippet and delete it.

<SharePoint:UIVersionedContent ID="WebPartPageHideQLStyles" UIVersion="4" runat="server">
<ContentTemplate>
<style type="text/css">
body #s4-leftpanel {
display: none;
}
.s4-ca {
margin-left: 0px;
}
</style>
</ContentTemplate>
</SharePoint:UIVersionedContent>

  • Now look for the following snippet and delete it.

<asp:Content ContentPlaceHolderId="PlaceHolderLeftNavBar" runat="server">
</asp:Content>

  • Save and Close

Now when you open the page, the quick launch navigation will be there.

Friday, July 12, 2013

Getting Started with MongoDB – Part 2

In the previous post we explored the basics of MongoDB. In this post we are going to dig deeper into it.

Indexing

Whenever a new collection is created, MongoDB automatically creates an index on the _id field. These indexes can be found in the system.indexes collection. You can show all indexes in the database using db.system.indexes.find(). Most queries will involve more fields than just _id, so we need to make indexes on those fields.

Before creating more indexes, let's see what the performance of a sample query is without any indexes other than the automatically created one on _id. Create the following function, populatePhones, to generate random phone numbers.

populatePhones = function(area, start, stop) {
  for(var i = start; i < stop; i++) {
    var country = 1 + ((Math.random() * 8) << 0);
    var num = (country * 1e10) + (area * 1e7) + i;
    db.phones.insert({
      _id: num,
      components: {
        country: country,
        area: area,
        prefix: (i * 1e-4) << 0,
        number: i
      },
      display: "+" + country + " " + area + "-" + i
    });
  }
}
Run the function with a three-digit area code (like 800) and a range of seven-digit numbers (5,550,000 to 5,650,000):

populatePhones( 800, 5550000, 5650000 )

Now we expect to see a new index created for our new collection.

> db.system.indexes.find()
{ "v" : 1, "key" : { "_id" : 1 }, "ns" : "newdb.towns", "name" : "_id_" }
{ "v" : 1, "key" : { "_id" : 1 }, "ns" : "newdb.countries", "name" : "_id_" }
{ "v" : 1, "key" : { "_id" : 1 }, "ns" : "newdb.phones", "name" : "_id_" }

Now let’s check the query without an index. The explain() method is used to output details of a given operation and can help us here.

> db.phones.find( { display : "+1 800-5650001" } ).explain()
{
        "cursor" : "BasicCursor",
        "isMultiKey" : false,
        "n" : 0,
        "nscannedObjects" : 100000,
        "nscanned" : 100000,
        "nscannedObjectsAllPlans" : 100000,
        "nscannedAllPlans" : 100000,
        "scanAndOrder" : false,
        "indexOnly" : false,
        "nYields" : 0,
        "nChunkSkips" : 0,
        "millis" : 134,
        "indexBounds" : {

        },
        "server" : "ESOLIMAN:27017"
}

Just to keep things simple, we will look only at the millis field, which gives the milliseconds needed to complete the query. Here it is 134.

Now we are going to create an index and see how it improves our query execution time. We create an index by calling ensureIndex(fields, options) on the collection. The fields parameter is an object containing the fields to be indexed. The options parameter describes the type of index to make. In production environments, creating an index on a large collection can be slow and resource-intensive, so you should create indexes at off-peak times. In our case we are going to build a unique index on the display field, dropping duplicate entries.

> db.phones.ensureIndex(
... { display : 1 },
... { unique : true, dropDups : true }
... )

Let's try explain() on the same find() and see the new value of the millis field. Query execution time improved from 134 ms down to 16 ms.

> db.phones.find( { display : "+1 800-5650001" } ).explain()
{
        "cursor" : "BtreeCursor display_1",
        "isMultiKey" : false,
        "n" : 0,
        "nscannedObjects" : 0,
        "nscanned" : 0,
        "nscannedObjectsAllPlans" : 0,
        "nscannedAllPlans" : 0,
        "scanAndOrder" : false,
        "indexOnly" : false,
        "nYields" : 0,
        "nChunkSkips" : 0,
        "millis" : 16,
        "indexBounds" : {
                "display" : [
                        [
                                "+1 800-5650001",
                                "+1 800-5650001"
                        ]
                ]
        },
        "server" : "ESOLIMAN:27017"
}

Notice the cursor changed from a BasicCursor to a B-tree cursor. MongoDB is no longer doing a full collection scan but instead walking the tree to retrieve the value.

Mongo can build your index on nested values: db.phones.ensureIndex({ "components.area": 1 }, { background : 1 })

Aggregations

count() counts the number of matching documents. It takes a query and returns a number.

> db.phones.count({'components.number': { $gt : 5599999 } })
100000

distinct() returns each matching value where one or more exists.

> db.phones.distinct('components.number', {'components.number': { $lt : 5550005 } })
[ 5550000, 5550001, 5550002, 5550003, 5550004 ]

group() groups documents in a collection by the specified keys and performs simple aggregation functions such as computing counts and sums. It is similar to GROUP BY in SQL. It accepts the following parameters:

  • key – Specifies one or more document fields to group by.
  • reduce – Specifies a function to perform on the documents during the grouping operation, such as computing a sum or a count. The aggregation function takes two arguments: the current document and the aggregate result for the previous documents in the group.
  • initial – Initializes the aggregation result document.
  • keyf – Optional. Alternative to the key field. Specifies a function that creates a "key object" for use as the grouping key. Use keyf instead of key to group by calculated fields rather than existing document fields.
  • cond – Optional. Specifies the selection criteria to determine which documents in the collection to process. If you omit the cond field, db.collection.group() processes all the documents in the collection for the group operation.
  • finalize – Optional. Specifies a function that runs on each item in the result set before db.collection.group() returns the final value. This function can either modify the result document or replace the result document as a whole.

> db.phones.group({
... initial : { count : 0 },
... reduce : function(phone, output) { output.count++; },
... cond : { 'components.number' : { $gt : 5599999 } },
... key : { 'components.area' : true }
... })
[
        {
                "components.area" : 800,
                "count" : 50000
        },
        {
                "components.area" : 855,
                "count" : 50000
        }
]

The first thing we did here was set an initial object with a field named count set to 0; fields created here will appear in the output. Next we described what to do with this field by declaring a reduce function that adds one for every document we encounter. Finally, we gave group a condition restricting which documents to reduce over.

Server-Side Commands

All the queries and operations we have run so far execute on the client side. The db object provides a command named eval(), which passes the given function to the server to be executed there. This can dramatically reduce the communication between client and server. It is similar to stored procedures in SQL.

There is also a set of prebuilt commands that can be executed on the server. Use db.listCommands() to get a list of these commands. To run any command on the server, use db.runCommand(), like db.runCommand({ "count" : "phones" }).

Although it is not recommended, you can store a JavaScript function on the server for later reuse.
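As a sketch of what that looks like (the function name addOne here is just an example): you save the function into the special system.js collection, after which it can be invoked through eval():

> db.system.js.save({ _id : "addOne", value : function(x) { return x + 1; } })
> db.eval("addOne(5)")
6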

MapReduce

MapReduce is a framework for parallelizing problems. Generally speaking, the parallelization happens in two steps:

  • "Map" step: The master node takes the input, divides it into smaller sub-problems, and distributes them to worker nodes. A worker node may do this again in turn, leading to a multi-level tree structure. The worker node processes the smaller problem, and passes the answer back to its master node.
  • "Reduce" step: The master node then collects the answers to all the sub-problems and combines them in some way to form the output – the answer to the problem it was originally trying to solve.

To show the MapReduce framework in action, let's build on the phones collection that we created previously. Let's generate a report that counts, for each country, all phone numbers that contain the same set of distinct digits.

First we create a helper function that extracts an array of all distinct digits (this step is not a MapReduce step).

> distinctDigits = function(phone) {
... var
... number = phone.components.number + '',
... seen = [],
... result = [],
... i = number.length;
... while(i--) {
...  seen[+number[i]] = 1;
...  }
... for (i=0; i<10; i++) {
...  if (seen[i]) {
...   result[result.length] = i;
...   }
...  }
... return result;
... }

> db.eval("distinctDigits(db.phones.findOne({ 'components.number' : 5551213 }))")
[ 1, 2, 3, 5 ]

Now let's find the distinct digits of each country's numbers. Since we need to query by country later, we will use the distinct digits array and the country as a compound key. For each distinct digits array in each country, we emit a count field holding the value 1.

> map = function() {
... var digits = distinctDigits(this);
... emit( { digits : digits, country : this.components.country } , { count : 1 } );
... }

The reducer function will sum all these 1s that have been emitted from the map function.

> reduce = function(key, values) {
... var total = 0;
... for(var i=0; i<values.length; i++) {
...  total += values[i].count;
...  }
...  return { count : total };
... }

Now it is time to put all the pieces together and start the whole thing (the input collection, the map function, the reduce function, and the output collection).

> results = db.runCommand({
... mapReduce : 'phones',
... map : map,
... reduce : reduce,
... out : 'phones.report'
... })
{
        "result" : "phones.report",
        "timeMillis" : 21084,
        "counts" : {
                "input" : 200000,
                "emit" : 200000,
                "reduce" : 48469,
                "output" : 3489
        },
        "ok" : 1
}

Now you can query the output collection like any other collection

> db.phones.report.find()
{ "_id" : { "digits" : [  0,  1,  2,  3,  4,  5,  6 ], "country" : 1 }, "value" : { "count" : 37 } }
{ "_id" : { "digits" : [  0,  1,  2,  3,  4,  5,  6 ], "country" : 2 }, "value" : { "count" : 23 } }
{ "_id" : { "digits" : [  0,  1,  2,  3,  4,  5,  6 ], "country" : 3 }, "value" : { "count" : 17 } }
{ "_id" : { "digits" : [  0,  1,  2,  3,  4,  5,  6 ], "country" : 4 }, "value" : { "count" : 29 } }
{ "_id" : { "digits" : [  0,  1,  2,  3,  4,  5,  6 ], "country" : 5 }, "value" : { "count" : 34 } }
{ "_id" : { "digits" : [  0,  1,  2,  3,  4,  5,  6 ], "country" : 6 }, "value" : { "count" : 35 } }
{ "_id" : { "digits" : [  0,  1,  2,  3,  4,  5,  6 ], "country" : 7 }, "value" : { "count" : 33 } }
{ "_id" : { "digits" : [  0,  1,  2,  3,  4,  5,  6 ], "country" : 8 }, "value" : { "count" : 32 } }
{ "_id" : { "digits" : [  0,  1,  2,  3,  5 ], "country" : 1 }, "value" : { "count" : 5 } }
{ "_id" : { "digits" : [  0,  1,  2,  3,  5 ], "country" : 2 }, "value" : { "count" : 7 } }
{ "_id" : { "digits" : [  0,  1,  2,  3,  5 ], "country" : 3 }, "value" : { "count" : 3 } }
{ "_id" : { "digits" : [  0,  1,  2,  3,  5 ], "country" : 4 }, "value" : { "count" : 6 } }
{ "_id" : { "digits" : [  0,  1,  2,  3,  5 ], "country" : 5 }, "value" : { "count" : 5 } }
{ "_id" : { "digits" : [  0,  1,  2,  3,  5 ], "country" : 6 }, "value" : { "count" : 10 } }
{ "_id" : { "digits" : [  0,  1,  2,  3,  5 ], "country" : 7 }, "value" : { "count" : 5 } }
{ "_id" : { "digits" : [  0,  1,  2,  3,  5 ], "country" : 8 }, "value" : { "count" : 7 } }
{ "_id" : { "digits" : [  0,  1,  2,  3,  5,  6 ], "country" : 1 }, "value" : { "count" : 95 } }
{ "_id" : { "digits" : [  0,  1,  2,  3,  5,  6 ], "country" : 2 }, "value" : { "count" : 104 } }
{ "_id" : { "digits" : [  0,  1,  2,  3,  5,  6 ], "country" : 3 }, "value" : { "count" : 108 } }
{ "_id" : { "digits" : [  0,  1,  2,  3,  5,  6 ], "country" : 4 }, "value" : { "count" : 113 } }
Type "it" for more

or

> db.phones.report.find({'_id.country' : 8})
{ "_id" : { "digits" : [  0,  1,  2,  3,  4,  5,  6 ], "country" : 8 }, "value" : { "count" : 32 } }
{ "_id" : { "digits" : [  0,  1,  2,  3,  5 ], "country" : 8 }, "value" : { "count" : 7 } }
{ "_id" : { "digits" : [  0,  1,  2,  3,  5,  6 ], "country" : 8 }, "value" : { "count" : 127 } }
{ "_id" : { "digits" : [  0,  1,  2,  3,  5,  6,  7 ], "country" : 8 }, "value" : { "count" : 28 } }
{ "_id" : { "digits" : [  0,  1,  2,  3,  5,  6,  8 ], "country" : 8 }, "value" : { "count" : 27 } }
{ "_id" : { "digits" : [  0,  1,  2,  3,  5,  6,  9 ], "country" : 8 }, "value" : { "count" : 29 } }
{ "_id" : { "digits" : [  0,  1,  2,  3,  5,  7 ], "country" : 8 }, "value" : { "count" : 10 } }
{ "_id" : { "digits" : [  0,  1,  2,  3,  5,  8 ], "country" : 8 }, "value" : { "count" : 7 } }
{ "_id" : { "digits" : [  0,  1,  2,  3,  5,  9 ], "country" : 8 }, "value" : { "count" : 8 } }
{ "_id" : { "digits" : [  0,  1,  2,  4,  5 ], "country" : 8 }, "value" : { "count" : 3 } }
{ "_id" : { "digits" : [  0,  1,  2,  4,  5,  6 ], "country" : 8 }, "value" : { "count" : 121 } }
{ "_id" : { "digits" : [  0,  1,  2,  4,  5,  6,  7 ], "country" : 8 }, "value" : { "count" : 25 } }
{ "_id" : { "digits" : [  0,  1,  2,  4,  5,  6,  8 ], "country" : 8 }, "value" : { "count" : 27 } }
{ "_id" : { "digits" : [  0,  1,  2,  4,  5,  6,  9 ], "country" : 8 }, "value" : { "count" : 17 } }
{ "_id" : { "digits" : [  0,  1,  2,  4,  5,  7 ], "country" : 8 }, "value" : { "count" : 4 } }
{ "_id" : { "digits" : [  0,  1,  2,  4,  5,  8 ], "country" : 8 }, "value" : { "count" : 4 } }
{ "_id" : { "digits" : [  0,  1,  2,  4,  5,  9 ], "country" : 8 }, "value" : { "count" : 7 } }
{ "_id" : { "digits" : [  0,  1,  2,  5 ], "country" : 8 }, "value" : { "count" : 14 } }
{ "_id" : { "digits" : [  0,  1,  2,  5,  6 ], "country" : 8 }, "value" : { "count" : 162 } }
{ "_id" : { "digits" : [  0,  1,  2,  5,  6,  7 ], "country" : 8 }, "value" : { "count" : 95 } }
Type "it" for more

The unique emitted keys are under the field _id, and all of the data returned from the reducers is under the field value. If you prefer the mapReduce command to just return the results, rather than writing them to a collection, you can set the out value to { inline : 1 }, but bear in mind there is a limit on the size of a result you can output this way (16 MB).
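For instance, rerunning the earlier command with inline output would look something like this sketch (reusing the map and reduce functions defined above):

> db.runCommand({
...   mapReduce : 'phones',
...   map : map,
...   reduce : reduce,
...   out : { inline : 1 }
... })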

In some situations you may need to feed a reducer function's output back into another run of the reducer. In those situations the reduce function must handle both cases correctly: its input may be either map's output or another reduce's output.
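Concretely, this means the document a reduce call returns should have the same shape as the values emitted by map. Our reducer above already satisfies this, since it both consumes and produces documents of the form { count : n }; the version below only adds comments to make that explicit:

> reduce = function(key, values) {
...   var total = 0;
...   // values may mix { count : 1 } documents emitted by map
...   // with { count : n } documents produced by earlier reduce calls
...   for(var i = 0; i < values.length; i++) {
...     total += values[i].count;
...   }
...   return { count : total }; // same shape as the emitted values
... }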

MongoDB has many more features that we haven't even mentioned here. We will continue working with them in later posts.

Getting Started with MongoDB – Part 1


MongoDB (from "humongous") is an open source document-oriented database system developed and supported by 10gen (founded by Dwight Merriman). It was first publicly released in 2009, and since then it has been a rising star in the NoSQL world. MongoDB stores structured data as JSON-like documents with dynamic schemas (technically, data is stored in a binary form of JSON known as BSON), making the integration of data in certain types of applications easier and faster.

Installation

  1. Download the latest mongoDB version from here.
  2. Extract the archive to your preferred location (in my case C:\mongodb). MongoDB is self-contained and does not have any other system dependencies; you can run it from any folder you choose.
  3. MongoDB requires a data folder to store its files (the default is C:\data\db). You may specify a different path with the dbpath setting when launching mongod.exe, as shown below.
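For example, to point mongod.exe at a custom data folder (here reusing the directory we create later for the service):

C:\mongodb\bin>mongod.exe --dbpath c:\mongodb\data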

Starting the Server

To start MongoDB, open a command prompt window and run mongod.exe from the bin directory (specify the data path if needed). The waiting for connections message in the console output indicates that the mongod.exe process is running successfully and waiting for connections on port 27017.

Connecting to the Server

To connect to the server, open another command prompt window and run mongo.exe from the bin directory.

Run MongoDB as a Windows Service

  1. Create a log file for MongoDB (in my case c:\mongodb\log\mongo.log ).
  2. Create a data directory for MongoDB (in my case c:\mongodb\data ).
  3. Open the command prompt window as an administrator.
  4. Run the following command: C:\mongodb\bin>mongod.exe --install --rest --master --logpath "c:\mongodb\log\mongo.log"
  5. Run regedit from start menu.
  6. Go to HKEY_LOCAL_MACHINE >> SYSTEM >> CurrentControlSet >> services
  7. Find the MongoDB key and edit the ImagePath value.
  8. Set value as c:\mongodb\bin\mongod --service  --rest  --master  --logpath=C:\mongodb\log\mongo.log  --dbpath=C:\mongodb\data
  9. Save and exit registry editor
  10. Open Component Services from Start menu >> Run.
  11. Locate the MongoDB service, right-click it, and select Properties.
  12. Set the Startup Type to Automatic. Then start the service.
    1. To run the MongoDB service from command window, use net start MongoDB
  13. Browse to http://localhost:28017/ to check; MongoDB should return stats.
In case you want to remove the MongoDB service, run C:\mongodb\bin\mongod.exe --remove

Data Model

Data in MongoDB has a flexible schema.
  • A database consists of a set of collections.
  • A collection consists of a set of documents.
    • Documents in the same collection do not need to have the same set of fields or structure.
    • Common fields in a collection’s documents can hold different types of data.
Based on what we mentioned, you could say that MongoDB is schema-less, but you may want to refer to this data modeling article before doing real work on MongoDB.

CRUD operations

  • When you start the mongo shell, it connects to the test database by default. To create a new database or switch to another database, use use newdb.
  • To show a list of databases, use show dbs (a database is created when you insert the first values into it; a database that has been created but has no values in it does not really exist yet).
  • To confirm the current session database, use db
  • To create a collection, just insert an initial record into it. Since Mongo is schema-less, there is no need to define anything up front. The following code creates a towns collection and inserts a document:
db.towns.insert({
  name: "New York",
  population: 22200000,
  last_census: ISODate("2009-07-31"),
  famous_for: [ "statue of liberty", "food" ],
  mayor : {
    name : "Michael Bloomberg",
    party : "I"
  }
})

    • Braces like {...} denote an object with key-value pairs.
    • brackets like [...] denote an array.
    • You can nest these values to any depth.
  • To show a list of collections, use show collections.
  • To list the contents of a collection, use db.towns.find()
    • You will see a system-generated field _id (composed of a timestamp, client machine ID, client process ID, and a 3-byte incremented counter); you can override this system-generated value by supplying your own _id.
  • MongoDB commands are JavaScript functions.
    • db is a JavaScript object that contains information about the current database. Try typeof db
    • db.x is a JavaScript object that represents a collection named x within the current database. Try typeof db.towns
    • db.x.help() will list available functions related to the given object. Try typeof db.towns.insert
    • If you want to inspect the source code of a function, type its name without parameters or parentheses.
  • Functions: you can create JavaScript functions and call them in the mongo shell, like:
function insertCity( name, population, last_census, famous_for, mayor_info ) {
  db.towns.insert({
    name: name,
    population: population,
    last_census: ISODate(last_census),
    famous_for: famous_for,
    mayor: mayor_info
  });
}

insertCity("Punxsutawney", 6200, '2008-31-01', ["phil the groundhog"], { name : "Jim Wehrle" } )
insertCity("Portland", 582000, '2007-20-09', ["beer", "food"], { name : "Sam Adams", party : "D" } )

Now we have three towns in our collection.
  • To get a specific document, we only need to pass its _id to the find() function as an ObjectId (findOne() retrieves only one matching document). A string can be converted to an ObjectId using the ObjectId(str) function.
 db.towns.find({ "_id" : ObjectId("51def56c1cf66f4c40bb7f4a") })
    • The find() function also accepts an optional second parameter: a fields object we can use to filter which fields are retrieved. If we want only the town name (along with _id), pass in name with a value resolving to 1 (or true).
db.towns.find({ "_id" : ObjectId("51def56c1cf66f4c40bb7f4a") }, { name : 1})
    • To retrieve all fields except name, set name to 0 (or false or null).
db.towns.find({ "_id" : ObjectId("51def56c1cf66f4c40bb7f4a") }, { name : 0})
    • You can retrieve documents based on criteria other than _id. You can use regular expressions or any operator.
db.towns.find( { name : /^P/, population : { $lt : 10000 } }, { name : 1, population : 1 } )
We said before that the query language is JavaScript, which means we can construct operations as we would construct objects. In the following query, we build criteria where the population must be between 10,000 and 1 million. Ranges work on dates as well.
> var population_range = {}
> population_range['$lt'] = 100000
100000
> population_range['$gt'] = 10000
10000
> population_range['$lt'] = 1000000
1000000
> population_range['$gt'] = 10000
10000
> db.towns.find( {name : /^P/, population : population_range }, {name: 1})
{ "_id" : ObjectId("51df08e72476b99608460870"), "name" : "Portland" }

    • You can also query based on values in nested arrays: matching exact values, matching partial values, matching all given values, or matching the lack of values.
> db.towns.find( { famous_for : 'food' }, { _id : 0, name : 1, famous_for : 1 } )
{ "name" : "New York", "famous_for" : [  "statue of liberty",  "food" ] }
{ "name" : "Portland", "famous_for" : [  "beer",  "food" ] }
> db.towns.find( { famous_for : /statue/ }, { _id : 0, name : 1, famous_for : 1 } )
{ "name" : "New York", "famous_for" : [  "statue of liberty",  "food" ] }
> db.towns.find( { famous_for : { $all : ['food', 'beer'] } }, { _id : 0, name:1, famous_for:1 } )
{ "name" : "Portland", "famous_for" : [  "beer",  "food" ] }
> db.towns.find( { famous_for : { $nin : ['food', 'beer'] } }, { _id : 0, name : 1, famous_for : 1 } )
{ "name" : "Punxsutawney", "famous_for" : [  "phil the groundhog" ] }

    • You can query a sub-document by giving the field name as a string separating nested layers with a dot.
> db.towns.find( { 'mayor.party' : 'I' }, { _id : 0, name : 1, mayor : 1 } )
{ "name" : "New York", "mayor" : { "name" : "Michael Bloomberg", "party" : "I" } }

    • To query for the nonexistence of a field value:
> db.towns.find( { 'mayor.party' : { $exists : false } }, { _id : 0, name : 1, mayor : 1 } )
{ "name" : "Punxsutawney", "mayor" : { "name" : "jim Wehrle" } }

  • elemMatch

$elemMatch lets us specify that a document (or nested document) must match all of our criteria for the document to count as a match and be returned. We can use any advanced operators within these criteria. To show it in action, let's insert some data into a new collection, countries:
> db.countries.insert({ _id : "us",
... name : "United States",
... exports : {
...  foods : [
...   { name : "bacon", tasty : true },
...   { name : "burgers" }
...          ]
...            }
... })
> db.countries.insert({ _id : "ca",
... name : "Canada",
... exports : {
...  foods : [
...   { name : "bacon", tasty : false },
...   { name : "syrup", tasty : true }
...          ]
...             }
... })
> db.countries.insert({ _id : "mx",
... name : "Mexico",
... exports : {
...  foods : [
...   { name : "salsa", tasty : true, condiment : true }
...          ]
...           }
... })
> print( db.countries.count() )
3

Now if we need to select countries that export tasty bacon, we should use $elemMatch in a query like:
> db.countries.find(
... {
...   'exports.foods' : {
...    $elemMatch : {
...      name : "bacon",
...      tasty : true
...     }
...   }
... },
... { _id : 0, name : 1 }
... )
{ "name" : "United States" }

If we didn't use $elemMatch and wrote a query like the following, it would return countries that export bacon or something tasty, not necessarily tasty bacon:
> db.countries.find(
... { 'exports.foods.name' : 'bacon' , 'exports.foods.tasty' : true },
... { _id : 0, name : 1 }
... )
{ "name" : "United States" }
{ "name" : "Canada" }

  • Boolean Operators

$or is a prefix for criteria that returns documents matching either condition. There are a lot of other operators you can use.
> db.countries.find({
... $or : [ { _id : "mx" } , { name : "United States" } ] },
... {_id : 1} )
{ "_id" : "us" }
{ "_id" : "mx" }

Update

The find() function took two parameters: criteria and a list of fields to return. The update() function works the same way. The first parameter is the criteria (the same you would use to retrieve the document through find()). The second parameter is either an object whose fields will replace the matched document(s) or a modifier operation ($set to set a field value, $unset to delete a field, $inc to increment a field value by a number).
The following query will set the field state to the string OR for the matching document:
db.towns.update( { _id : ObjectId("4d0ada87bb30773266f39fe5") }, { $set : { "state" : "OR" } } )
but the following query will replace the matching document with the new document { state : "OR" }:
db.towns.update( { _id : ObjectId("4d0ada87bb30773266f39fe5") }, { state : "OR" } )
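For completeness, a quick sketch of the other two modifier operations mentioned above, run against the same document ($inc adds to a numeric field, $unset removes the state field we just set):

db.towns.update( { _id : ObjectId("4d0ada87bb30773266f39fe5") }, { $inc : { population : 1000 } } )
db.towns.update( { _id : ObjectId("4d0ada87bb30773266f39fe5") }, { $unset : { state : 1 } } )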

References

Although MongoDB is schema-less, you can make one document reference another using a construct like { $ref : "collection_name", $id : "reference_id" }. In the following query we link the New York town with the country us. Notice the new country field in the New York town document.
> db.towns.update(
... { _id : ObjectId("51def56c1cf66f4c40bb7f4a") },
... { $set : { country : { $ref : "countries", $id : "us" } } }
... )
> db.towns.find( { _id : ObjectId("51def56c1cf66f4c40bb7f4a") } )
{ "_id" : ObjectId("51def56c1cf66f4c40bb7f4a"), "country" : DBRef("countries", "us"), "famous_for" : [  "statue of liberty",  "food" ], "last_census" : ISODate("2009-07-31T00:00:00Z"), "mayor" : { "na
me" : "Michael Bloomberg", "party" : "I" }, "name" : "New York", "population" : 22200000 }
Now we can retrieve New York from the towns collection, then use it to retrieve its country:
> var NY = db.towns.findOne( { _id : ObjectId("51def56c1cf66f4c40bb7f4a") } )
> db.countries.findOne( { _id : NY.country.$id })
{
        "_id" : "us",
        "name" : "United States",
        "exports" : {
                "foods" : [
                        {
                                "name" : "bacon",
                                "tasty" : true
                        },
                        {
                                "name" : "burgers"
                        }
                ]
        }
}

Or, in a different way: > db[ NY.country.$ref ].findOne( { _id : NY.country.$id } )

Delete

Removing documents from a collection is simple: call the remove() function with your criteria, and all matching documents will be removed. It's a recommended practice to build your criteria in an object, use that criteria with find() to ensure the matching documents are the expected ones, and only then pass the criteria to remove().
> var bad_bacon = { 'exports.foods' : {
... $elemMatch : { name : 'bacon', tasty : false }
... } }
> db.countries.find ( bad_bacon)
{ "_id" : "ca", "name" : "Canada", "exports" : { "foods" : [    {       "name" : "bacon",       "tasty" : false },      {       "name" : "syrup",       "tasty" : true } ] } }
> db.countries.remove( bad_bacon)
> db.countries.count()
2

Reading with Code

You can ask MongoDB to run a decision function across documents:
db.towns.find( function() {
  return this.population > 6000 && this.population < 600000;
} )
or in a short-hand format: db.towns.find("this.population > 6000 && this.population < 600000")
or combine custom code with other criteria using $where:
db.towns.find( {
  $where : "this.population > 6000 && this.population < 600000",
  famous_for : /groundhog/
} )
Custom code should be your last resort due to the following:
  1. Custom code queries run slower than regular queries.
  2. Custom code queries can’t be indexed.
  3. MongoDb can’t optimize custom code queries.
  4. If your custom code assumes the existence of a particular field and this field is missing from a single document, the entire query will fail.
In this post we explored the basics of MongoDB, a rising star in the NoSQL family and the most common document database. We saw how to install and configure it, how to store nested structured data as JSON documents and query that data at any depth, and how to update and delete data. In the next post we are going to dig deeper into MongoDB.

Tuesday, July 9, 2013

Getting Started with Apache Cassandra

Apache Cassandra is “an open source, distributed, decentralized, elastically scalable, highly available, fault-tolerant, tuneably consistent, column-oriented database that bases its distribution design on Amazon’s Dynamo and its data model on Google’s Bigtable” (source: “Cassandra: The Definitive Guide,” O’Reilly Media, 2010, p. 14).

Cassandra is built to store lots of data across a variety of machines arranged in a ring; in other words, it scales horizontally rather than vertically.

Data Model

Cassandra is based on a key-value model and it is organized according to the following concepts:

  • Column is a key-value pair.


  • Column Family is a set of key-value pairs (columns in Cassandra’s terminology). They are sorted by their keys. Families are referenced and sorted by row keys.


  • Super Column: the value of a key-value pair can be a sequence of key-value pairs as well. In this case, the outer column is called a super column.


  • Columns and Super Columns can equally be used within Column Families


  • Columns or Super Columns are stored ordered by names within their Column Families

For a better understanding of Cassandra’s data model, refer to this article by Maxim Grinev.

 

Installation

  1. Download the latest Cassandra version from here (I got version 1.2.6).
  2. Extract the archive (I extracted it to C:\apache-cassandra-1.2.6 )
  3. If you don’t have Java installed on your machine, go and get it installed.
  4. Add environment variables
    1. Right-click the My Computer icon on your desktop or start menu.
    2. Click the Advanced tab (or the Advanced System Settings)
    3. Under System Variables, click New (adjust for your own directories)
      1. Variable Name :  CASSANDRA_HOME
      2. Variable Value : C:\apache-cassandra-1.2.6
      3. click OK
    4. Under System Variables, click New (adjust for your own directories)
      1. Variable Name : JAVA_HOME
      2. Variable Value : C:\Program Files\Java\jre7
      3. click OK
  5. Now, open command window and navigate to your bin directory inside Cassandra directory (C:\apache-cassandra-1.2.6\bin in my case).
  6. Launch Cassandra by executing the command cassandra -f (the "-f" flag causes it to run in the foreground). You will see lots of messages coming out; if everything goes fine, the server starts up successfully.


Now we have a running Cassandra server expecting incoming connections on port 9160.

Once Cassandra is up and running on your machine, we can connect to the running instance using the Cassandra command-line interface, launched by running “cassandra-cli.bat”, from the Cassandra “bin” directory.


Commands

  • show api version; to show the current api version.
  • describe cluster; to show a description of the current cluster.


  • create keyspace TestKS; to create a keyspace. The keyspace name has to be unique.
  • use TestKS; to switch to keyspace TestKS.
  • create column family TestCF; to create a column family TestCF within the current keyspace.
    • No other schema definition is required, the column family is a collection of name/value pairs.
  • set TestCF[ascii('TestKey')][ascii('column1')]=ascii('TestValue'); to insert the TestKey/TestValue key/value pair into the column named column1 within the column family TestCF. You can append with ttl = x to the set command to make the column delete itself x seconds after insertion (see the sketch after this list).
    • By default Cassandra treats data as byte arrays, but you can convert to other types such as Long, int, integer, etc. Also, timeuuid() generates a new UUID. For full information about the set command, type help set;
  • get TestCF[ascii("TestKey")]; to retrieve the value stored in the key TestKey within the column family TestCF


  • del TestCF[ascii('TestKey')][ascii('column2')]; rows and columns can be deleted by specifying the row key and/or the column name with the del (delete) command.


  • list TestCF; list the data inside a column family


  • drop column family TestCF; removes a column family.
  • drop keyspace TestKS; removes a key space.


==> You can insert into super columns much like inserting into normal columns. They can be read with get, written with set, and deleted with del. The super column version of these commands uses an extra ['xxx'] to represent the extra sub-index level.


  • assume TestCF comparator as ascii; decodes and helps display the results of get and list requests inside the command-line interface. It can be used in the same way to set the validator and keys. By default, columns with no metadata are displayed in hex format, because row keys, column names, and column values are byte arrays. After using assume, the column names and values are displayed rather than the hex code.


  • Type Enforcement: Cassandra is designed to store and retrieve simple byte arrays, but it also has support for built-in types. When creating or updating a column family, the user can supply column metadata that instructs the CLI Cassandra client how to display data and helps the server enforce types during insertion operations.
    create column family User with comparator = UTF8Type;
    update column family User with
            column_metadata =
            [
            {column_name: first, validation_class: AsciiType},
            {column_name: last, validation_class: AsciiType},
            {column_name: age, validation_class: IntegerType, index_type: KEYS}
            ];
  • Querying data:

set User[ascii('jsmith')][ascii('first')] = ascii('John');
set User[ascii('jsmith')][ascii('last')] = ascii('Smith');
set User[ascii('jsmith')][ascii('age')] = '38';

get User where age = '38';

  • Update is the same as set.

set User[ascii('jsmith')][ascii('first')] = ascii('Jack');

  • Quit the client:

quit;
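As referenced in the set bullet above, a sketch of the ttl variant (the column name and the 30-second lifetime are arbitrary examples); after 30 seconds, a get for column3 will no longer return the value:

set TestCF[ascii('TestKey')][ascii('column3')] = ascii('temp') with ttl = 30;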

 

In this post we just touched the tip of Cassandra's iceberg. In later posts we will dig deeper into it and show how to write .NET programs against it.

Monday, July 8, 2013

Al-Shorouk newspaper: a report of the testimonies of the residents of the Obour buildings

Residents of the "Obour buildings": the gunfire and gas started from the Republican Guard without warning



Published: Monday, 8 July 2013 - 4:13 PM
Last updated: Monday, 8 July 2013 - 4:13 PM


Republican Guard clashes - archive photo

Written by: Mostafa Hashem

"Al-Shorouk" documented the Republican Guard headquarters events, which claimed the lives of more than 50 supporters of the deposed president Mohamed Morsi and 2 policemen, through the accounts of the residents of the buildings facing the Republican Guard headquarters, known as the "Obour buildings".

Dr. Mahmoud Soliman, a resident of the building facing the Republican Guard headquarters, said: "I woke up right after dawn to the sound of gunfire and helicopters, choking from tear gas even though all the windows of our apartment were closed. I started watching what was happening through the glass and saw army tanks and armored vehicles facing the demonstrators. I saw them burning the protesters' tents, and with them were some young men wearing civilian clothes."

He added: "Only 5 minutes after I heard the gunfire, there was a knock on our apartment door; I found 3 young men asking me to help them because the army and the police were chasing them. I let them into one of the rooms and locked it behind them. A few minutes later the military police and the police had filled our building and started firing inside every floor and threatening the residents; the traces are still there. They pounded on the doors and broke down the door of one apartment because its owner did not open for them; nobody was inside it in the first place."

He continued: "I believe the military police fired at some of the demonstrators who were sheltering in our building. The gunfire from the army and the police lasted about a full hour, and the helicopter was hovering overhead to monitor what was happening."

Soliman confirmed that no ambulances were present during that entire hour; they came only after the shooting ended, to carry the bodies from the streets and from the buildings.

About the protesters who hid in his apartment, he said: "I did not let them out until ten o'clock, one after another, because protesters were hiding in most of the apartments, and we started letting them out in turn."

Mohamed Thabet, a worker in the garage of one of the buildings facing the Republican Guard headquarters, said: "The police and the army suddenly, without warning, started firing bullets and gas. My small children choked and woke up terrified. I saw some protesters enter the garage to shelter in it, and some of them hid under cars, but the military police and the police went in after them and dragged me out with them to the street. They counted me among the protesters and started insulting, humiliating, and beating us, until some of the officers and conscripts who serve in the Republican Guard recognized me, because they know me, and pulled me out from among them."

Ahmed Hammam, a security guard at an accounting office in one of the buildings facing the Republican Guard headquarters: "I was praying the dawn prayer with the protesters in front of the Republican Guard. The imam finished the prayer quickly, and afterwards some sheikhs appealed to the army, saying the sit-in was peaceful and that they were their brothers. But suddenly, without warning and without any provocation (I was with them, right in front of the building where I work), the army and the police started firing live rounds and tear gas."

He added: "The protesters started running and entered some of the buildings, including the one where I work, but the military police, the police, and State Security investigators chased them. I brought some of the women, children, and young men into the apartment; they reached 40 people, among them a female member of the dissolved People's Assembly from the Freedom and Justice Party. The military police threatened us and the residents with live fire if they did not come out. Some complied, and they fired inside the building where I work."

Hammam added: "The protesters kept hiding at my place until 12 noon, and after that I started letting them out one after another, because the military police and State Security filled the streets around the Guard headquarters."

One of the protesters who hid in a residential apartment, Ahmed El-Deeb, said: "We did not attack the Republican Guard as some media outlets say. If we had wanted that, we would have attacked them after the evening prayer when our numbers were large, but we did not want to do that in the first place."

About how the events began he said: "Someone woke me up for the dawn prayer. I got up, performed ablution, then went to pray in front of the Republican Guard. In the second rak'ah the imam began the supplication and, without prior warning, stumbled and ended the supplication. At the end of the prayer the shooting began from 3 directions: Salah Salem, Youssef Abbas, and the Republican Guard club itself."

El-Deeb added: "The gas canisters began, and we did not know where to go; we were besieged by birdshot, automatic fire, and live bullets from every direction, with no respect for women or children. A large number of children, women, and elderly people fell dead, and there were huge numbers of injuries and faintings. I hid in one of the buildings and a number of demonstrators rushed in with me; a large number of women entered one of the apartments and we locked it behind them, and I still do not know their fate. The security forces and the army stormed the building and arrested everyone who had gone up into it, but God protected me when one of the residents opened his apartment to me, and I remained cut off from the street until noon."

------------------------------------------------------------

The above article was retrieved from the Google cached version because the original has been deleted.

http://webcache.googleusercontent.com/search?q=cache%3Ahttp%3A%2F%2Fshorouknews.com%2Fnews%2Fview.aspx%3Fcdate%3D08072013%26id%3D28a98832-a914-4633-92d9-16fab99a42a1&rlz=1C1CHKZ_enEG430EG430&oq=cache%3Ahttp%3A%2F%2Fshorouknews.com%2Fnews%2Fview.aspx%3Fcdate%3D08072013%26id%3D28a98832-a914-4633-92d9-16fab99a42a1&aqs=chrome.0.57j58.1469j0&sourceid=chrome&ie=UTF-8


Now the article has been removed.


Wednesday, July 3, 2013

Redis 101 – Part 2

We introduced many of Redis's fundamental concepts and commands in the last post. In this post we are going to introduce some of its more advanced features.

Publish-Subscribe

Previously we queued data that could be read by a blocking pop command. Using that queue, we made a very basic publish-subscribe model: any number of messages could be pushed to the queue, and a single queue reader would pop messages as they became available. This is powerful but limited. Redis provides some specialized publish-subscribe (or pub-sub) commands.

  • SUBSCRIBE subscribes a client to a key, known as a channel in pub-sub terminology; the client will block until messages are available.


  • PUBLISH will push a message into the channel, returning how many subscribers have received it (see the sample session after this list).
    • Publisher window: issues the PUBLISH command.
    • Subscriber window: receives three items: the string "message", the channel name, and the published message value.
  • UNSUBSCRIBE unsubscribes or disconnects from the specified channel. If no channel name is provided, it disconnects from all channels.
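To make the message flow concrete, a minimal session across two redis-cli windows might look like this (the channel name comments is just an example):

Subscriber window:
redis 127.0.0.1:6379> SUBSCRIBE comments

Publisher window:
redis 127.0.0.1:6379> PUBLISH comments "hello"
(integer) 1

The subscriber window then prints:
1) "message"
2) "comments"
3) "hello"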

Server Information

  • INFO returns a list of server data, including version, process ID, memory used, and uptime.

In order to change any of Redis's default configurations, you need to edit the redis.conf file at C:\Program Files\Redis\conf. It is fairly self-explanatory.

Durability

Redis has a few persistence options:

  • No persistence at all, which simply keeps all values in main memory.
  • Forced save
    • Use command SAVE to force server to save database to disk.
    • Use command BGSAVE to force server to save database to disk asynchronously in the background.
    • Use command LASTSAVE to get a timestamp of the last time a Redis disk write succeeded (also provided through the last_save_time field in the server INFO output).

Snapshotting

By default Redis saves snapshots of the dataset on disk in a binary file called dump.rdb. You can configure Redis to save the dataset every N seconds if there are at least M changes in the dataset, or you can manually force a save by calling the SAVE or BGSAVE commands as we said before. For example, the following configuration will make Redis automatically dump the dataset to disk every 60 seconds if at least 1000 keys changed (this strategy is known as snapshotting):

save 60 1000 

 

Append-only file

Snapshotting is not very durable. If the computer running Redis stops, the latest data written to Redis will be lost. While this may not be a big deal for some applications, there are use cases that require full durability, and for those snapshotting alone is not a viable option.

The append-only file (AOF) is an alternative, fully durable strategy for Redis. It became available in version 1.1. You can turn on the AOF in your configuration file:

appendonly yes

Then we must decide how often a command is appended to the file. Setting always is the most durable, since every command is saved. By default everysec is enabled, which saves up commands and writes them only once a second. This is a decent trade-off: it's fast enough, and in the worst case you'll lose only the last second of data. Finally, no is an option that just lets the OS handle flushing; it can be fairly infrequent and is not recommended.

# appendfsync always

appendfsync everysec

# appendfsync no

From now on, every time Redis receives a command that changes the dataset (e.g. SET), it will append it to the AOF. When you restart Redis, it will replay the AOF to rebuild the state. Append-only mode has more detailed parameters; refer to the online documentation for more details.

 

Security

Redis provides command-level security through obscurity by allowing you to hide or rename commands. The following will rename the FLUSHALL command (which removes all keys from the system) to some hard-to-guess value like c283d93ac9528f986023793b411e4ba2:

rename-command FLUSHALL c283d93ac9528f986023793b411e4ba2

If we attempt to execute FLUSHALL against this server, we’ll be hit with an error. We can also disable the command entirely by setting it to a blank string.

rename-command FLUSHALL ""

You can set any number of commands to a blank string to allow only a customized subset of the Redis’s commands.

Benchmarking

Redis provides an excellent benchmarking tool, redis-benchmark. It connects locally to port 6379 by default and issues 10,000 requests using 50 parallel clients. The tool tests many commands and produces a long output report. It is located in C:\Program Files\Redis\.
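The defaults can be overridden on the command line; a sketch (the values below just restate the defaults, with -q added for a compact report):

C:\Program Files\Redis>redis-benchmark -n 10000 -c 50 -q

Here -n is the number of requests, -c the number of parallel clients, and -q prints only the requests-per-second summary for each command.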


Replication

Redis supports master-slave replication. A server is a master by default if you don't set it as a slave of anything; data will be replicated to any number of slave servers. Configuring a master-slave setup is as simple as running another instance with the slaveof option in the slave's conf file set to the master's IP and port. Refer to our previous post for more details.

This concludes our Redis 101 tutorial; we tried to touch all the basic and intermediate features. Later we are going to show how to write C# programs against Redis.

Tuesday, July 2, 2013

Running multiple Redis instances on the same server

Redis runs as a background process that listens on a specific port (6379 by default) for incoming requests from clients. Running multiple instances requires a separate conf file and a new init script. The conf file specifies the details for the new instance, and the init script handles starting/stopping of the instance's background process.

  1. Make a copy of the redis.conf file at C:\Program Files\Redis\Conf and give it a new name redis-s1.conf. Leave both files in the same directory C:\Program Files\Redis\Conf
  2. Open the redis-s1.conf with your favorite text editor (e.g. notepad ) and change the following:
    1. PID File
      • pidfile /var/run/redis-s1.pid
    2. Port
      • port 6380
    3. Log File
      • logfile "C:/Program Files/Redis/logs/redis-s1.log"
    4. Data Directory (don’t forget to create that directory)
      • dir "C:/Program Files/Redis/data2"
    5. For replication only: if you are planning to use this instance as a slave in a master-slave replication setup (where 127.0.0.1 is the master instance IP and 6379 is the master instance port)
      • slaveof 127.0.0.1 6379
  3. To start the new instance
    1. Open a new command window
    2. Navigate to C:\Program Files\Redis
    3. Run redis-server conf\redis-s1.conf
  4. To connect to the new instance
    1. Open a new command window
    2. Navigate to C:\Program Files\Redis
    3. Run redis-cli -h 127.0.0.1 -p 6380

Now you can run multiple Redis instances on the same server, like any other DBMS.
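To verify that a slave instance wired up correctly (assuming the slaveof setting from step 2.5), you can inspect the replication fields of the INFO output; a sketch of what to look for:

redis 127.0.0.1:6380> INFO
...
role:slave
master_host:127.0.0.1
master_port:6379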