The biggest Gujarati Encyclopedia


Bhagwadgomandal is the biggest and most comprehensive work in Gujarati. The visionary Maharaja Bhagvadsinhji of Gondal gifted the original Bhagwadgomandal to the world after 26 years of detailed, scientific work. This encyclopedic dictionary is a cultural milestone of the Gujarati language.

Ratilal Chandaria's Gujaratilexicon team has digitized Bhagwadgomandal and created its digital avatar. The aim is to showcase the richness of this indispensable resource for experts and lovers of the Gujarati language. It is a treasure of knowledge and every Gujarati's pride. Explore it!

Key Features:
Cultural Landmark of Gujarati Language

Comprehensive, Scientific and Rich Treasure Of Knowledge

2.81 Lakh Words, 8.22 Lakh Meanings, 26 Years Of Devotion, 9200 Pages - All At A Single Click Now

Indispensable Resource For Everyone

Available In Original, Animated and Digital Form on the Internet & CD

Fully Unicode


Click here to visit the originating website

Search Image by Selected Color

Google has launched a new feature to search images by a selected color


Google Images has added color selection to the interface, Ionut reports (the parameters for this appeared a while ago). Search for sun, for instance, and then pick green from the color box…

Click here to read full article

Performance Comparison: Regex versus string operations

Do you know the performance difference between regular expressions and string operations?

Click here to read full article.


I consider regular expressions one of the most useful features ever. I use them a lot, not only when coding, but also when editing files, often instead of copying and pasting. I find the Visual Studio Find/Replace feature with regular expressions really useful as well. In case you are not familiar with it, you can use regular expressions to find and replace characters like this:


In the picture, I used the expression {[^;]+}; - meaning: tag the string formed by any characters up to a ";" (at least one character), and replace the matching text with "// " followed by the tagged expression, dropping the final ";". There are a lot of tutorials about regular expressions. I just learned the basics, and now I try, fail, undo, and try again until I get it right.
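The original post's screenshot is not reproduced here; as an illustration of the same idea, here is a rough .NET equivalent of that Find/Replace (the sample input string is my own, and .NET uses ( ) and $1 for capture groups rather than Visual Studio's { } and \1):

```csharp
using System;
using System.Text.RegularExpressions;

class FindReplaceDemo
{
    static void Main()
    {
        // Tag everything up to a ";" (at least one character) ...
        string pattern = @"([^;]+);";
        // ... and replace the match with "// " plus the tagged text, dropping the ";".
        string output = Regex.Replace("int x = 1;", pattern, "// $1");
        Console.WriteLine(output); // → // int x = 1
    }
}
```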

Moving back to coding, .NET has great support for regular expressions. The classes are relatively easy to use (though at the beginning I had to play for a while to find out how to capture strings in one match, and other more advanced features). The biggest advantages I find in using Regex are that it makes parsing input very easy (once you have the regular expression in place) and that it makes it much harder to introduce bugs - less code by definition has fewer bugs, and parsing with regular expressions requires less code than the traditional method of parsing strings with various string methods: getting substrings at different indexes, checking that the string starts or ends with certain characters, and so on.

However, there are cases when plain string parsing is better than regular expressions: when the checks are done on a path that is executed a lot (a hot path) and that has strict performance requirements. Why? Regular expressions are slower than direct string operations.

I did a simple experiment and measured the time needed by regex and string operations to perform the same task. Suppose we need to keep data about persons in the format "Firstname:Oana Lastname:Platon Money:2183 UniqueIdentifier:fwsjfjehfjkwh8r378". I have defined a constant that represents this format, and I'll use it to serialize the person data.

const string nameFormat = "Firstname:{0} Lastname:{1} Money:{2} UniqueIdentifier:{3}";

The data must be serialized and deserialized many times (let's say we need to send the data over the wire frequently, or something like that). When deserializing the data, we need to make sure that it respects the pattern, and then we need to extract the firstname, lastname, etc.
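Serializing one record with this constant is a single string.Format call (the sample values are the ones from the format example above):

```csharp
using System;

class SerializeDemo
{
    const string nameFormat = "Firstname:{0} Lastname:{1} Money:{2} UniqueIdentifier:{3}";

    static void Main()
    {
        // Fill the four placeholders with one person's data.
        string description = string.Format(nameFormat,
            "Oana", "Platon", 2183, "fwsjfjehfjkwh8r378");
        Console.WriteLine(description);
        // → Firstname:Oana Lastname:Platon Money:2183 UniqueIdentifier:fwsjfjehfjkwh8r378
    }
}
```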

1. Using regular expressions

I defined a regular expression like this:

static Regex regex = new Regex("^Firstname:(\\w+)\\sLastname:(\\w+)\\sMoney:(\\d{1,9})\\sUniqueIdentifier:([\\w-]+)$", RegexOptions.IgnoreCase | RegexOptions.Compiled);

Then, the code to parse the expressions and get the desired data is:

void ParseWithRegex(string description)
{
    Match m = regex.Match(description);

    if (!m.Success)
    {
        throw new ArgumentException("description doesn't follow the expected format");
    }

    this.firstname = m.Groups[1].Value;
    this.lastname = m.Groups[2].Value;

    if (!int.TryParse(m.Groups[3].Value, out this.age))
    {
        throw new ArgumentException("age doesn't have the correct value");
    }

    this.uniqueIdentifier = m.Groups[4].Value;
}

2. Using string operations

Verifying that the given string respects the format becomes more difficult. In our case, the pattern is pretty simple, but imagine that we needed to check an email address or something more complicated. In that case, the code would need a lot of branches to cover all the possible cases.

void ParseWithStrings(string description)
{
    string[] parts = description.Split(new char[] { ' ', '\t' });

    if (parts.Length != 4)
    {
        throw new ArgumentException("description doesn't follow the expected pattern");
    }

    this.firstname = parts[0].Substring(parts[0].IndexOf(":") + 1);
    this.lastname = parts[1].Substring(parts[1].IndexOf(":") + 1);

    if (!int.TryParse(parts[2].Substring(parts[2].IndexOf(":") + 1), out this.age))
    {
        throw new ArgumentException("age doesn't have the correct value");
    }

    this.uniqueIdentifier = parts[3].Substring(parts[3].IndexOf(":") + 1);
}


Note that this is much more error-prone than the previous code, because it needs to look at a lot of indexes and extract the desired parts of the string.

However, when I run the two methods in a loop and measure how long they take with a Stopwatch (from the System.Diagnostics namespace), I get these results:
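The timing loop itself is not shown in the post; below is a minimal, self-contained sketch of how such a measurement could look (the iteration count and the condensed parse bodies are my assumptions, not the article's exact code):

```csharp
using System;
using System.Diagnostics;
using System.Text.RegularExpressions;

class Benchmark
{
    static readonly Regex regex = new Regex(
        @"^Firstname:(\w+)\sLastname:(\w+)\sMoney:(\d{1,9})\sUniqueIdentifier:([\w-]+)$",
        RegexOptions.IgnoreCase | RegexOptions.Compiled);

    // Condensed version of the regex parser: validate the format, discard the groups.
    public static void ParseWithRegex(string d)
    {
        Match m = regex.Match(d);
        if (!m.Success) throw new ArgumentException("bad format");
    }

    // Condensed version of the string parser: split, validate, extract, discard.
    public static void ParseWithStrings(string d)
    {
        string[] parts = d.Split(new char[] { ' ', '\t' });
        if (parts.Length != 4) throw new ArgumentException("bad format");
        for (int i = 0; i < 4; i++)
            parts[i].Substring(parts[i].IndexOf(":") + 1);
    }

    // Run one parser n times over the same input and return the elapsed milliseconds.
    public static long Time(Action<string> parse, string d, int n)
    {
        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < n; i++) parse(d);
        sw.Stop();
        return sw.ElapsedMilliseconds;
    }

    static void Main()
    {
        string d = "Firstname:Oana Lastname:Platon Money:2183 UniqueIdentifier:fwsjfjehfjkwh8r378";
        Console.WriteLine("Regex:   {0} ms", Time(ParseWithRegex, d, 100000));
        Console.WriteLine("Strings: {0} ms", Time(ParseWithStrings, d, 100000));
    }
}
```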


In conclusion, when choosing between using traditional string parsing or regular expressions, I would recommend:...MORE

Getting a list of files from a MOSS document library using a SharePoint web service

A useful link that shows how to get a list of files from a document library of MOSS using a web service


My challenge was simple. I needed to develop an SSIS package that
would download and extract data from every Excel file held in document
libraries across several SharePoint sites. SSIS was the natural choice as
the data needed to be cleaned and validated before being imported into a
database. However, SSIS is not great with web services – especially in the data
flow. As I had not worked with the SharePoint web services much, I started with a
good old Console application.
MOSS, or more accurately WSS, provides a whole host of web services to obtain information about
SharePoint sites. However, figuring out which method to invoke and what
parameters to pass is more problematic – especially as many of the
parameters are chunks of Collaborative Application Mark-up Language (CAML), a
dialect of XML developed by Microsoft specifically for use with SharePoint.

A False Start

My first console app simply obtained the GUID of the document library using the
GetListCollection() method of the Lists web service. The GUID was then
passed to the GetListItems() method, which duly provided all documents and folders
at the top level of the document library. It then seemed logical to me to
recursively call the GetListItems() method using the GUID of each sub-folder.
Oh no, how wrong could I be! The GetListItems() method simply chokes on these
folder GUIDs.

On searching the internet, I found many other forum posts and blog
entries about the same topic – but no working solutions. I also made an
extensive search of my eBook collection – again, no solutions – which
overall motivated me to write this blog entry.

The solution - RTFM

Well, if I had read the whole page in the manual, I would have got to the
solution earlier. The key to my puzzle was the QueryOptions XML fragment, which
has both a Folder element and the all-important <ViewAttributes
Scope="Recursive" /> element. Using these elements together, it is
possible to obtain a list of all documents in all subfolders of the list.
Indeed, it does not even bother returning the subfolder details!
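Based on the elements just described, the QueryOptions fragment looks along these lines (a sketch only; the Folder value is illustrative, and the exact schema is in the WSS GetListItems documentation):

```xml
<QueryOptions>
    <!-- Folder: the library or sub-folder to query -->
    <Folder>Shared Documents</Folder>
    <!-- Recursive scope returns documents from all sub-folders,
         without the folder items themselves -->
    <ViewAttributes Scope="Recursive" />
</QueryOptions>
```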

So here is the code for my working C# sample.

using System;
using System.Collections.Generic;
using System.Text;
using System.Xml;
using System.Web.Services;
using System.Web;
using System.Net;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            string siteUrl = @"http://yourserver/sites/yoursite";
            // the original sample was truncated here; "Shared Documents"
            // is the default MOSS document library name
            string documentLibraryName = @"Shared Documents";

            SharePointList.Lists wsList = new SharePointList.Lists();
            wsList.Credentials = System.Net.CredentialCache.DefaultCredentials;
            WebProxy proxyObj = new WebProxy("yourproxy", 80);
            wsList.Proxy = proxyObj;
            wsList.Url = siteUrl + @"/_vti_bin/lists.asmx";

            // get a list of all top level lists
            XmlNode allLists = wsList.GetListCollection();

            // load into an XML document so we can use XPath to query
            XmlDocument allListsDoc = new XmlDocument();
            allListsDoc.LoadXml(allLists.OuterXml);
            // allListsDoc.Save(@"c:\allListsDoc.xml"); // for debug

            XmlNamespaceManager ns = new XmlNamespaceManager(allListsDoc.NameTable);
            ns.AddNamespace("d", allLists.NamespaceURI);

            // now get the GUID of the document library we are looking for
            XmlNode dlNode = allListsDoc.SelectSingleNode("/d:Lists/d:List[@Title='"
                + documentLibraryName + "']", ns);

            if (dlNode == null)
            {
                Console.WriteLine("Library '{0}' not found!", documentLibraryName);
                return;
            }

            // obtain the GUID for the document library and the webID
            string documentLibraryGUID = dlNode.Attributes["ID"].Value;
            string webId = dlNode.Attributes["WebId"].Value;
            Console.WriteLine("folder '{0}' GUID={1}", documentLibraryName, documentLibraryGUID);

            // create ViewFields CAML (arguments truncated in the original post;
            // the AddXmlElement helper appears in the full article)
            XmlDocument viewFieldsDoc = new XmlDocument();
            XmlElement ViewFields = AddXmlElement(viewFieldsDoc, "ViewFields", /* ... */);
            //viewFieldsDoc.Save(@"c:\viewFields.xml"); // for debug

            // create QueryOptions CAML
            XmlDocument queryOptionsDoc = new XmlDocument();


Click here to read full article.

Is the above link useful to you? Let us know your feedback; it will help us to improve our postings. You can also send your feedback via the link.

Deploying Assembly in GAC vs BIN

There are many differences between these two methods...

Read it here


  • If you are frequently updating the assembly, it is better to
    deploy it in BIN, since the assembly is reloaded automatically
    right after updating. But when you update an assembly in the GAC,
    you have to restart IIS (IISRESET) to pick up the new version,
    because the GAC keeps the assembly cached.

  • When you deploy your assembly to the GAC, it can be accessed
    from any SharePoint web application. But when you deploy an
    assembly in a web application's BIN folder, it can only be
    accessed from that web application. So, if you have a
    general-purpose web part, it is better to deploy it to the GAC
    and avoid multiple assembly deployments in BIN.

  • If you have multiple versions of the same assembly, you have to
    deploy it to the GAC, because the GAC manages multiple versions
    of a given assembly, but BIN doesn't.
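As a quick illustration of the operational difference (the assembly name and paths are hypothetical; gacutil ships with the .NET SDK):

```shell
REM Deploy to the GAC (the assembly must be strong-named),
REM then restart IIS so the cached copy is replaced:
gacutil /i MyWebPart.dll
iisreset

REM Deploying to BIN is just a file copy; the web application
REM reloads the assembly automatically:
copy MyWebPart.dll C:\Inetpub\wwwroot\wss\VirtualDirectories\80\bin\
```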

Click here to read full article

CADIE : Google Cognitive Autoheuristic Distributed-Intelligence Entity

Don't know CADIE? Go through the article below, or click here to read the full article.



March 31st, 2009 11:59:59 pm

Introducing CADIE

Research group switches on world's first "artificial intelligence" tasked-array system.

For several years now a small research group has been working on some
challenging problems in the areas of neural networking, natural
language and autonomous problem-solving. Last fall this group achieved
a significant breakthrough: a powerful new technique for solving
reinforcement learning problems, resulting in the first functional
global-scale neuro-evolutionary learning cluster.

Since then progress has been rapid, and tonight we're
pleased to announce that just moments ago, the world's first Cognitive
Autoheuristic Distributed-Intelligence Entity (CADIE) was switched on
and began performing some initial functions. It's an exciting moment
that we're determined to build upon by coming to understand more fully
what CADIE's emergence might mean, for Google and for our users. So
although CADIE technology will be rolled out with the caution befitting
any advance of this magnitude, in the months to come users can expect
to notice her influence on various properties. Earlier
today, for instance, CADIE deduced from a quick scan of the visual
segment of the social web a set of online design principles from which
she derived this intriguing homepage.

These are merely the first steps onto what will doubtless prove a long and
difficult road. Considerable bugs remain in CADIE's programming, and
considerable development clearly is called for. But we can't imagine a
more important journey for Google to have undertaken.

If you can't understand, just press Ctrl + A and read here: look at the time of the announcement (it's March 31st, 2009 11:59:59 pm).