Monday, March 31, 2008

CAMP BY FSF-AP CHAPTER

How Microsoft killed ODF

Hasn't anyone learned anything over the last few years? It doesn't matter whether OOXML is approved or not. All that matters is that the process that gave ODF its international standing is ruined. ODF got where it is today because it is an international standard, not because it is necessarily the answer to every possible question. People believed in the ISO process, and believed that a standard carrying its seal of approval was actually worth something in the real world. By badgering, bribing and threatening, Microsoft has effectively destroyed that process.

So who cares if OOXML becomes a standard or not? No one, if there isn't a gold standard for it to be judged against. While ODF was a saint, the sinner OOXML looked very dark and shabby. Now that Microsoft has cast doubt on the lineage of ODF, everyone is a sinner. If you will excuse an awful analogy: which would you prefer to eat, ice cream or sawdust? Easy choice, eh? Now that everyone knows the ISO process can be corrupted, the choice can be portrayed as one dodgy standard versus another. So, what do you want to eat now, sawdust or coal? Not as clear-cut any more, is it?

As soon as one national body fell to Microsoft's manipulation, OOXML had won. In the world of FUD and dirty deals, Microsoft is king. It has made a career out of muddying the waters to hide its own inadequacies and inconsistencies. There is (or maybe, these days, was) a saying: 'My country, right or wrong'. For some ISO members, perhaps we should change it to 'My company, right or wrong'.


In this whole sordid tale, some people stood above the crud and for that they should be saluted... and some didn't.

ref:fsdaily.com
post by rakesh

Tuesday, March 25, 2008

What is Copyleft?

Copyleft is a general method for making a program or other work free, and requiring all modified and extended versions of the program to be free as well.

The simplest way to make a program free software is to put it in the public domain, uncopyrighted. This allows people to share the program and their improvements, if they are so minded. But it also allows uncooperative people to convert the program into proprietary software. They can make changes, many or few, and distribute the result as a proprietary product. People who receive the program in that modified form do not have the freedom that the original author gave them; the middleman has stripped it away.

In the GNU project, our aim is to give all users the freedom to redistribute and change GNU software. If middlemen could strip off the freedom, we might have many users, but those users would not have freedom. So instead of putting GNU software in the public domain, we "copyleft" it. Copyleft says that anyone who redistributes the software, with or without changes, must pass along the freedom to further copy and change it. Copyleft guarantees that every user has freedom.

more>>>
posted by rakeshkumar
ref: fsdaily
csestuff.co.cc

Saturday, March 22, 2008

India votes NO for OOXML

After a colossal amount of debate and discussion over the past year, India has finally voted NO on OOXML. Today the committee was asked: "Should India change its September 2007 No vote into Yes?"

13 members voted No.
5 members (including Microsoft, of course) voted Yes.
1 member abstained.
3 did not attend.

The government bodies, academic institutions and industry voted against OOXML. The only people who voted for OOXML were the software exporters--TCS, Infosys, Wipro and NASSCOM (National Association of Software Services Companies).

posted by rakeshkumar
www.close2job.com
ref: osindiablogspot.com

Impossible thing #2: Comprehensive free knowledge repositories like Wikipedia and Project Gutenberg

Project Gutenberg, started in 1971, is the oldest part of the modern free culture movement. Wikipedia is a relative upstart, riding on the wave of success of free software, extending the idea to other kinds of information content. Today, Project Gutenberg, with over 24,000 e-texts, is probably larger than the legendary Library of Alexandria. Wikipedia is the largest and most comprehensive encyclopedic work ever created in the history of mankind. It’s common to draw comparisons to Encyclopedia Britannica, but they are hardly comparable works—Wikipedia is dozens of times larger and covers many more subjects. Accuracy is a more debatable topic, but studies have suggested that Wikipedia is not as much less accurate than Britannica as one might naively suppose.

posted by rakesh
www.close2job.com
ref: freesoftwaremagazine.com

Making the impossible happen: the rules of free culture

In the mainstream, free culture is regarded with varying degrees of skepticism, disdain, and dewy-eyed optimism. It violates the “rules” by which we imagine our world works, and many people react badly to that which they don’t understand.

If the system of rules on which we have based our entire industrial civilization is wrong, will we have to face the prospect of re-ordering that society from the ground up? Will that civilization now collapse (like Wile E. Coyote falling once he notices there's no ground underneath him)?

On the opposite extreme, for those who’ve given up on the rationalizations, preferring a “faith-based” approach, there is a great tendency to leap to magical thinking. Perhaps there are gods of freedom reordering the world to make it a happier place? If we shake our rattles hard enough, will all our dreams come true?

But where is genuine reason in all this? Here, I'll present six "impossible" achievements of free culture, each representing a particular challenge to the old paradigm. Then I'll present a set of rules to help understand "how the magic works", and give a more realistic framework for what can and can't be expected from the commons-based methods on which free culture operates.

more>>
posted by rakeshkumar
www.close2job.com
ref:freesoftwaremagazine.com

Friday, March 14, 2008

Free-software lawyers: Don't trust Microsoft's Open XML patent pledge

Prominent legal counsel the Software Freedom Law Center said that the legal terms covering Microsoft's Open XML document formats pose a patent risk to free and open-source software developers.

The SFLC on Wednesday published a legal analysis of Microsoft's Open Specification Promise (OSP), a document written to give developers the green light to make open-source products based on specifications written by Microsoft.

The OSP is meant to allay concerns over violating Microsoft patents that relate to Open XML, Microsoft's document specifications that the company is trying to have certified as a standard at the ISO (International Organization for Standardization). For example, a company could create an open-source spreadsheet or server software that can handle Open XML documents.

Microsoft is awaiting the results of a crucial vote, expected by March 29, from representatives of national standards bodies.

But the SFLC said that the OSP is not to be trusted. It said that it did the legal analysis following the close of a recent Ballot Resolution Meeting held to resolve problems with the Open XML specification.

Specifically, the SFLC concluded that the patent protections apply only to current versions of the specifications; future versions may not be covered, it noted.

Also, software developers who write code based on a Microsoft-derived specification, such as Open XML, could be limited in how that code is used. "Any code that implements the specification may also do other things in other contexts, so in effect the OSP does not cover any actual code, only some uses of code," according to the analysis.

Finally, the SFLC said that OSP-covered specifications are not compatible with the General Public License (GPL), which covers thousands of free and open-source products.

Most open-source software advocates have opposed Microsoft's effort to standardize Open XML and the SFLC is no exception.

While not attempting to clarify the text of the OSP to indicate compatibility with the GPL or provide a safe harbor through its guidance materials, Microsoft wrongly blames the free software legal community for Microsoft's failure to present a promise that satisfies the requirements of the GPL. It is true that a broad audience of developers could implement the specifications, but they would be unable to be certain that implementations based on the latest versions of the specifications would be safe from attack. They would also be unable to distribute their code for any type of use, as is integral to the GPL and to all free software.

As the final period for consideration of OOXML by ISO elapses, SFLC recommends against the establishment of OOXML as an international standard and cautions GPL implementers not to rely on the OSP.

ref: fsdaily
rakeshkumar
close2job.com

Thursday, March 13, 2008

Microsoft's Open Specification Promise: No Assurance for GPL

There has been much discussion in the free software community and in the press about the inadequacy of Microsoft's Office Open XML (OOXML) as a standard, including good analysis of some of the shortcomings of Microsoft's Open Specification Promise (OSP), a promise that is supposed to protect projects from patent risk. Nonetheless, following the close of the ISO-BRM meeting in Geneva, SFLC's clients and colleagues have continued to express uncertainty as to whether the OSP would adequately apply to implementations licensed under the GNU General Public License (GPL). In response to these requests for clarification, we publicly conclude that the OSP provides no assurance to GPL developers and that it is unsafe to rely upon the OSP for any free software implementation, whether under the GPL or another free software license.

for more

ref: fsdaily
posted by rakeshkumar
www.close2job.com

Tuesday, March 11, 2008

Virtualization

What is Virtualization?
Virtualization is a proven software technology that is rapidly transforming the IT landscape and fundamentally changing the way that people compute.

Today’s powerful x86 computer hardware was originally designed to run only a single operating system and a single application, but virtualization breaks that bond, making it possible to run multiple operating systems and multiple applications on the same computer at the same time, increasing the utilization and flexibility of hardware.

Virtualization is a technology that can benefit anyone who uses a computer, from IT professionals and Mac enthusiasts to commercial businesses and government organizations. Join the millions of people around the world who use virtualization to save time, money and energy while achieving more with the computer hardware they already own.

How Does Virtualization Work?
In essence, virtualization lets you transform hardware into software. Use software such as VMware ESX Server to transform or “virtualize” the hardware resources of an x86-based computer—including the CPU, RAM, hard disk and network controller—to create a fully functional virtual machine that can run its own operating system and applications just like a “real” computer.

Multiple virtual machines share hardware resources without interfering with each other so that you can safely run several operating systems and applications at the same time on a single computer.


ref:vmware
posted by www.close2job.com
rakesh kumar

Sunday, March 9, 2008

Virtualization era

QEMU is a processor emulator that relies on dynamic binary translation to achieve a reasonable speed while being easy to port to new host CPU architectures. In conjunction with CPU emulation, it also provides a set of device models, allowing it to run a variety of unmodified guest operating systems; it can thus be viewed as a hosted virtual machine monitor. It also provides an accelerated mode supporting a mixture of binary translation (for kernel code) and native execution (for user code), in the same fashion as VMware Workstation and Microsoft Virtual PC. QEMU can also be used purely for CPU emulation of user-level processes; in this mode of operation, it is most similar to Valgrind.

Features

* Supports emulating IA-32 (x86) PCs, AMD64 PCs, MIPS R4000, Sun's SPARC sun4m, Sun's SPARC sun4u, ARM development boards (Integrator/CP and Versatile/PB), SH4 SHIX board, PowerPC (PReP and Power Macintosh), and ETRAX CRIS architectures.
* Support for other architectures in both host and emulated systems (see homepage for complete list).
* Increased speed — some applications can run in close to real time.
* Implements Copy-On-Write disk image formats. You can declare a multi-gigabyte virtual drive; the disk image will only be as large as what is actually used (see the sketch after this list).
* Also implements overlay images. You can keep a snapshot of the guest system, and write changes to a separate image file. If the guest system breaks, it's simple to roll back to the snapshot.
* Support for running Linux binaries for other architectures.
* Can save and restore the state of the machine (programs running, etc.).
* Virtual network card emulation.
* SMP support.
* Guest OS does not need to be modified/patched.
* Performance is improved when the KQEMU kernel module is used.
* Command line tools allow full control of QEMU without having to run X11.
* Remote control of the emulated machine via the integrated VNC server.
* USB tablet support — this provides "grabless" mouse control. Activated with "-usb -usbdevice tablet".
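As a concrete illustration of the disk-image, overlay, VNC and USB-tablet features above, here is a minimal command-line sketch. All file names are placeholders, and the exact binary name (qemu, qemu-system-x86_64) depends on your distribution and the architecture you are emulating:

  # create a 10 GB copy-on-write disk image; it only grows as the guest writes to it
  qemu-img create -f qcow2 guest.qcow2 10G

  # install a guest OS from an ISO, with 512 MB of RAM and "grabless" mouse control
  qemu -hda guest.qcow2 -cdrom install.iso -boot d -m 512 -usb -usbdevice tablet

  # keep guest.qcow2 as a pristine snapshot and write further changes to an overlay
  qemu-img create -f qcow2 -b guest.qcow2 overlay.qcow2

  # run from the overlay, exporting the display through the integrated VNC server
  qemu -hda overlay.qcow2 -m 512 -vnc :1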

How to love Free Software in 3 steps: configure, make, make install

Tinkering with system core files, first aid kit

Let’s take a common example: you completely upset Windows XP’s core system DLL. Surprisingly, the OS still works. Explanation: the system dynamically replaces modified/removed core files from a hidden backup cache. This system already existed in Windows 2000, but in XP it covers pretty much all base install files (including Messenger). However, try removing all copies of the file you want to modify, all at the same time, refuse to restore the file from CD and see the system crash and burn.

In effect, for Joe User, you can’t corrupt the system because you are actively prevented from tinkering with it, and the system automatically reverts anything you try to do to it while it’s running. Moreover, the fact that Windows locks down opened files makes it difficult to really put the system down before a reboot (well, in that specific case anyway).

Under GNU/Linux and xBSD: when you update a system file, create a backup. Also, it's a good idea to learn what a minimal booting system requires. On top of that, nothing stops you restoring your system from a boot CD if you've kept a backup of your modified files: there's no checksum of the 'correct' files stored in a registry somewhere that would prevent you from restoring backup files. Last but not least, most package managers allow you to ask for a package reinstall, which will reset all its settings to the defaults.
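A minimal sketch of that routine, assuming a Debian-style and an RPM-style system (the file and package names below are only placeholders):

  # keep a copy before touching a system file
  cp /etc/some.conf /etc/some.conf.bak

  # if things go wrong, reinstall the owning package to get the stock files back
  apt-get install --reinstall some-package        # Debian, Ubuntu
  rpm -Uvh --replacepkgs some-package.rpm         # RPM-based distros, given the .rpm file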

Finally, library versioning under UNIX-like systems is quite developed: not only can you host several versions of a library, soft links and rules for dynamic linking allow you to create a special version of a library which will be linked to by a single software, without much trouble.

In short, there is little chance you'll trash a Linux system in an unrecoverable way even if you tinker with system files, unless you go at it as root, with a hammer and matching subtlety.

Now though, it’s not GNU’s or BSD’s or (usually) Windows’ fault if you trash the hard disk.
Tinkering with partitions, the pitfalls

There are three great sources of damage to GNU/Linux partitions:

* outdated boot manager data (LILO or GRUB); it usually happens after a kernel update not followed by a GRUB or LILO refresh,
* badly enumerated partitions; it usually happens when removing, resizing and moving partitions on a complex layout disk,
* an overwritten Master Boot Record; it usually happens when you install Windows XP or Vista (Windows 2000 is a better citizen here).

The pitfalls are various, and can indeed make one wonder. However, at least with GNU/Linux you can hope for a recovery, while an OS like Windows will often require a reinstall (a cloned partition of mine insisted on calling itself ‘F:' after restore, no way to boot the system to correct that, and the registry hives all got corrupted).

The first problem is easy to avoid: keep a working kernel installed as long as you’re not sure the second one works, and always update LILO or GRUB after you’ve tinkered with kernels.
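For example, refreshing LILO after a kernel change is a single command; with GRUB (legacy) the main thing is to check the menu entries, and the MBR only needs rewriting if something else overwrote it. A rough sketch, assuming standard file locations:

  # LILO: re-read /etc/lilo.conf and rewrite the boot map
  /sbin/lilo

  # GRUB legacy: make sure the new kernel and its initrd appear in the menu
  grep -A3 "^title" /boot/grub/menu.lst

  # only if the MBR itself was overwritten (e.g. by a Windows install):
  grub-install /dev/sda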

For the second case, if you start resizing, destroying and creating partitions all over the place, make sure you have an efficient LiveCD on hand (Knoppix being a reference): not only is it a good recovery tool, it's also better not to work on a 'live' system (not that it's impossible, just that it saves you from juggling with chroot all the time). It will allow you to revert partition changes and/or update your 'live' /etc/fstab file in a matter of minutes. Moreover, once you're editing this file, several options are open to you.

You have two ways to address a peripheral in /etc/fstab to mount it: either you call it through /dev (like /dev/sda1), or you use its UUID: the latter is much harder to write off the top of your head, but on the other hand it makes using a roaming GNU/Linux system much easier. Moreover, it doesn’t fall prey to partition resizing and movement troubles as easily as the /dev path method.
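The two styles look like this in /etc/fstab (the UUID below is made up); blkid or the /dev/disk/by-uuid/ symlinks tell you which UUID belongs to which partition:

  # device-path style: breaks if the partition gets renumbered or the disk moves
  /dev/sda1   /   ext3   defaults   0 1

  # UUID style: survives repartitioning and disk reordering
  UUID=1234abcd-56ef-78ab-90cd-ef1234567890   /   ext3   defaults   0 1

  # find out which UUID belongs to which partition
  blkid
  ls -l /dev/disk/by-uuid/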
Crush the kernel

Some distributions allow much easier tinkering with the kernel than others: Mandriva for example allows you to build “vanilla” kernel sources with a few command lines, while Ubuntu is much more painful because it won’t automatically build the kernel’s RAMdisk image that contains required modules for boot. You can find more information on your distribution’s forums (on top of that, forum posts from one distribution may apply to another; on the matter of compile time options, the Gentoo forum is a gold mine).

When it’s a matter of adding kernel modules, the system gives you enough warnings before you do something stupid, to prevent you from crashing your system:

* if the module isn't provided with the vanilla kernel, it may not be very stable;
* if the module isn't provided with the distribution's kernel, it is probably quite unstable;
* if dmesg returns a kernel version mismatch on module load, it may not even work at all;
* if dmesg returns symbol mismatches, you're trying to fit a square peg in a round hole.

After that, if you insist on loading the module and force it, pray. Just, pray. Hard.
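For the record, the forced load looks like this (the module name is a placeholder); checking dmesg immediately afterwards is the least you can do:

  modprobe some-module              # normal load; refuses on a version mismatch
  modprobe --force some-module      # the "pray" step: ignore the version mismatch
  dmesg | tail                      # see what the kernel thought of it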
Compilation: when to do it

There are 3 cases in which you may want to compile a package from source:

1. it isn’t provided by your distro, and you need it (have you checked all available sources? Alternate repositories?)
2. the provided version is old, or buggy, or slow
3. the provided version hasn’t been compiled with the options you want

For 1, you could try to get a package from another distro: alien and smart even allow you to use Debian packages on Red Hat-packaged distros and vice versa, so what's the point? Except when you just know that the package is easy to compile (but then, you wouldn't be reading these lines; you'd be done already).
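For instance, converting a package between formats with alien is a one-liner (package names are placeholders, and the result still needs testing):

  alien --to-rpm some-package.deb    # Debian package -> RPM
  alien --to-deb some-package.rpm    # RPM -> Debian package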

For 2, well, same as option 1. If you want to play with fire, be ready to get burnt: backups, backups, backups. Of course, a properly set up kernel source can give your machine a noticeable performance boost (disabling multiple-processor support on single-core systems, compiling with specific processor options instead of generic i586 for 32-bit systems, and disabling all debugging options may be worth it; but the rest isn't worth the trouble).

For 3, you could get your distro’s source package: it should be provided with original build instructions, and at least you’ll be sure that other installed software won’t baulk.

In short: do you really need to compile from source?
Compilation 2: what to do, what not to do

First, Read The Friggin’ Manual! It could be the README or INSTALL file provided with most source packages. If the package isn’t provided with a configure file, you have two options:

* either the tarball contains an experimental snapshot of the software, and you'll need to build the makefile yourself, with automake or cmake; READ the instructions to know which is required and recommended for your system!
* or the tarball doesn't require a makefile (the source files perform checks themselves, or the build system is preset, not requiring a configure script): you can skip to the make step.

If you can’t find any build instruction, try running make right away. If it doesn’t work, make will tell you what it’s missing. Go back to the top, lather, rinse, repeat.
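In practice, those cases usually boil down to one of the following sequences; which one applies depends entirely on what the README or INSTALL file asks for:

  # autotools snapshot without a configure script: generate it first
  autoreconf --install
  ./configure
  make

  # cmake-based source tree
  cmake .
  make

  # no build system to generate at all: just try it and read the errors
  make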
Start configuration

As said before, you’ll probably find build instructions inside the tarball, one way or another. If you have a configure script but no manual, try ./configure --help to get a list of options. Use the ones you need (for example, many programmers compile to put compiled binaries inside /usr/local, but in many distributions you actually want to put everything inside /usr), and get started.

For example, ./configure --prefix=/usr will set up the system to put everything inside your ‘main’ system directories. For a first time compilation, it’s not recommended: put them somewhere else instead (like /usr/test, or something), you’ll later use soft links to make use of them instead of already installed libraries.
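Put together, a cautious first build starts like this (/usr/test is the same throwaway prefix used above):

  ./configure --help                 # list every option the script understands
  ./configure --prefix=/usr/test     # keep the result out of the real system directories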

The script will run; look at it attentively, even the least customized one will tell you what it’s looking up. Once the process is finished, try to install as many of the missing packages (both libraries and development packages) as you deem necessary, and run ./configure again. Again, check it out closely. Some are more verbose than others, but at least you make sure that you won’t be missing too much right from the start.

Once there, it’s time for make: watch outputs closely, as an aborted compilation will, more often than not, be preceded by log messages like ‘line XXX: undeclared variable #something’. If you get several of them, it usually indicates a missing library not covered by ./configure (if it actually happens, it’s a good idea to inform the developer about it); install the library with the name closest to the missing variable, and try again. Now it should work.

Lather, rinse, repeat. Eventually, you’ll get a built package. make install usually covers the rest (just remember what prefix you’ve set up at the ./configure step), but I don’t recommend it right away: run the local binary first, and see if it works. Moreover, look up what files may be replaced: if possible, uninstall conflicting packages first.
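A sketch of that loop, with the output captured so the 'undeclared' messages mentioned above are easy to find afterwards:

  make 2>&1 | tee build.log          # build, keeping a log
  grep -i "error" build.log          # hunt for the first failure, not the last

  # only once the freshly built binary runs correctly from the source tree:
  make install                       # installs under the --prefix chosen earlier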
Compilation done, what next?

Well, if you reach that point, you're as far along as Bill Gates was when he shipped Windows 95 ('it compiles! Quick, ship it!'). Next step: use your new system library. If you've followed my advice but couldn't remove the package (too many dependencies), you now have the original library in use and an unused custom one. Change to the directory where you want to put the file, rename the old one (mv libthingie.so libthingie.so.old), create a symbolic link to your new one (ln -s /usr/test/lib/libthingie.so .), run ldconfig, and (re)start one piece of software that makes use of this library. If it works, restart all other processes. If it crashes, undo what you did, restore the older library (rm libthingie.so && mv libthingie.so.old libthingie.so) and check again that you compiled your library correctly (be careful about 32/64-bit; the failing process will complain about a symbol mismatch).
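The whole swap, exactly as described above, condensed into one sequence (libthingie and /usr/test are the same placeholder names used in the text):

  cd /usr/lib                                   # or wherever the old library lives
  mv libthingie.so libthingie.so.old            # keep the original around
  ln -s /usr/test/lib/libthingie.so .           # point at the freshly built one
  ldconfig                                      # refresh the dynamic linker cache

  # rollback if anything misbehaves
  rm libthingie.so && mv libthingie.so.old libthingie.so
  ldconfig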

If you intended to build a kernel module (say, the highly unstable mach64 DRM module), then following the manual just works: all you need to do is check the sources out of git, run make, then manually copy both drm.ko and mach64.ko into your kernel's module tree. Just don't forget to make a backup of the original drm.ko (or drm.ko.gz) file somewhere safe and to run depmod -a after the copy. gzip them if you want, then try modprobe mach64 and look at the output of dmesg | tail to be sure there is no error.
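Condensed, using the same mach64 example; the exact destination under /lib/modules varies between kernels, so locate the stock drm.ko first:

  make                                          # builds drm.ko and mach64.ko
  MODDIR=/lib/modules/$(uname -r)
  find $MODDIR -name 'drm.ko*'                  # locate the stock module
  cp $MODDIR/kernel/drivers/char/drm/drm.ko ~/drm.ko.orig   # back it up (path may differ)
  cp drm.ko mach64.ko $MODDIR/kernel/drivers/char/drm/
  depmod -a                                     # rebuild the module dependency map
  modprobe mach64
  dmesg | tail                                  # check for errors before touching X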

Right then, you can restart Xorg and see if DRI is enabled: on some systems, it'll load Xorg and a composited GNOME, KDE or Xfce fine, until you try to display texture-mapped polygons; on others, it'll give you a black screen and sometimes a nice hard system hang, so it would have been a good idea to set the initial runlevel in /etc/inittab to 3 (Mandriva, SuSE) or 1 (Ubuntu), or to have a backup kernel image somewhere.
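Setting that default runlevel is a one-line change in /etc/inittab (a sketch; as noted above, the number to use depends on the distribution):

  # /etc/inittab -- boot to a text console by default
  id:3:initdefault: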
Conclusion

I’m not qualified as a programmer; to be frank, I’ve never typed a line of C/C++ in my life. However, I’ve successfully built several packages from sources, by taking to heart a few simple instructions:

* read the manual
* read the manual again, because some stuff at the beginning may make more sense
* read the release files, as they sometimes contain contradictions with the manual, but at least you’ll know what to expect
* check out compile options, as they sometimes contain more details or the latest build syntax
* read the manual one more time, because you won't remember it very well by this stage
* read script and compilation options; there could be some stuff hidden there
* make backups, and keep a LiveCD handy.

Once you've gotten used to it, compiling a package from sources (be it the kernel, a module, an Xorg driver, a library, whatever) and making use of it is no more difficult than reading a cooking recipe for making a pizza: just don't forget to read it, and to keep flour on your hands, otherwise the dough will stick.


reference :fsdaily
published by rakeshkumar
www.close2job.com

Saturday, March 8, 2008

ABOUT INDYAROCKS

About Indyarocks - Indyarocks is the fastest growing online and mobile community for Indians across the globe. Here you can create your profile, meet and make friends, share photos and blogs, post classifieds, catch up with the latest movie buzz and much more.
Who can join Indyarocks?
# Friends
# Families
# Classmates
# Co-workers
# Professionals
# Artists
# Organizations and
# Everyone who wants to be in regular touch with their friends

The free software movement is a political cause, not a technical one.

The free software movement is a political cause, not a technical one. "Choose based on technical criteria first of all" is the opposite of what we say.

There are many reasons why GNU packages should support other GNU packages.

The GNU Project is not just a collection of software packages. Its intended result is a coherent operating system. It is particularly important, therefore, that GNU packages should work well with other GNU packages. For instance, we would like Emacs to work well with git or mercurial, but we especially want it to work well with Bzr.

The maintainers of one GNU package should use other GNU packages so they will notice whether the packages work well together, and make them work well together.

We also promote the use of other GNU packages in this way. Other people don't necessarily see which editor you use, but they all see what dVCS you use.

regards
rakesh

Wednesday, March 5, 2008

Too Many Patents? How Patent Inflation Plagues Information Technology

In 2004, Brandeis economist Adam Jaffe and Harvard Business School professor Josh Lerner published Innovation and its Discontents: How Our Broken Patent System is Endangering Innovation and Progress, and What to Do About It - a rare book on patents written for generalists, not patent lawyers. "Broken" is strong language, but it gets attention.

Jaffe and Lerner argue that patents had become too easy to get and too powerful:

[W]e converted the weapon that a patent represents from something like a handgun or a pocket knife into a bazooka, and then started handing out the bazookas to pretty much anyone who asked me for one, despite the legal tests of novelty and non-obviousness. [p. 35]

They attribute this to two congressional decisions: creating a specialized patent appeals court, the Court of Appeals for the Federal Circuit, in 1982; and putting the Patent and Trademark Office (PTO) on a fee-funded basis.

Under the intellectual leadership of former patent attorney Giles Rich, the Federal Circuit spent much of its first 16 years enhancing the prospects of patent applicants and patent holders. The high-water mark was the notorious 1998 State Street decision, which Rich authored and which summarily eliminated the longstanding exclusion of patents for business methods. (1) Suddenly, patents were no longer limited to technology but available for any form of human activity.

By tying the PTO's budget to the fees it collected, Congress would inspire a new PTO mission, "to help customers get patents." Under the fee structure prescribed by Congress, the agency lost money on examinations but made money from issuance and maintenance fees. This internal cross-subsidy gave the agency an incentive to grant patents rather than deny them. (2) It embraced the flood of non-technological patents that followed State Street, arguing in international harmonization negotiations that allowing patents for all activities, not just technology, was "best practice." With support only from patent organizations, the U.S. delegation threatened to walk out of the negotiations if other governments did not go along. (3)

Patents on Intangibles

While there were rumblings in Congress following the State Street decision, the 54-person board of the Intellectual Property Owners Association resolved unanimously that Congress should keep its hands off business method patents. For good measure, it passed the resolution again the following year. (4) But were the board members speaking for upper management -- or for corporate patent departments? Even IBM signed on -- although IBM also went on record opposing business method patents, noting the fundamental problem with patents on intangibles: "[W]ith the advent of business method patenting it is possible to obtain exclusive rights over a general business model, which can include ALL solutions to a business problem, simply by articulating the problem." (5)

But patent institutions have a natural self-interest in expanding the scope and scale of the patent system. As one treatise puts it:

[B]road notions of patent eligibility appear to be in the best interest of the patent bar, the PTO, and the Federal Circuit [CAFC]. Workloads increase and regulatory authority expands when new industries become subject to the appropriations authorized by the patent law. Noticeably absent from the private, administrative and judicial structure is a high regard for the public interest. (6)

For similar reasons, the patent bar has also favored low standards of patentability. When the Supreme Court heard oral arguments in KSR International v. Teleflex, the attorney for Teleflex rushed to the defense of the Federal Circuit's low standard: "[R]emember, every single major patent bar association in the country has filed on our side". To which Justice Scalia countered that a low standard "produces more patents, which is what the patent bar gets paid for, to acquire patents, not to get patent applications denied but to get them granted. And the more you narrow the obviousness standard ... the more likely it is that the patent will be granted." Indeed, 40 years previously in Graham v. Deere, the Supreme Court was called on to interpret the standard in the 1952 Patent Act. Then, too, the patent bar lined up to claim that Congress had lowered the standard. Then, too, the Supreme Court disagreed.

The brief effort to rein in business method patents in 2000-01 was stymied not only by the unified voice of the corporate patent departments but also by the instant constituency of new patent holders and applicants. No reform bill introduced in the last few years has dared touch on subject matter limitations. However, the current House bill was amended to include a provision against patents on tax-avoidance strategies, a particularly obnoxious intrusion of the patent system into a very different policy domain. And a narrow provision restricting remedies for infringement of check imaging patents was added to the Senate bill.

The Supreme Court last addressed abstract subject matter in 1981, and since then the Federal Circuit has made virtually anything patentable. Yet information technology has transformed the U.S. economy, not through identifiable patents, but through a powerfully enabling stack of open, unpatented protocols that we know as the Internet and the Web. (7)

Not until 2005 did the Court revisit patentable subject matter, by agreeing to review Labcorp v. Metabolite, a case involving a medical diagnosis rather than software or business methods. But the Court took the unusual step of reversing course and choosing not to decide the case, although three justices dissented from the decision not to decide, making it clear they would have rejected patentability.

Why has it taken 26 years and counting for the Court to focus on the critical distinction between abstract ideas and patentable subject matter?

Few litigants want to raise this issue before the Federal Circuit, since State Street seemed to state so strongly that anything is patentable as long as it's useful. Why stick your finger in the eye of the appeals court if you've got a fighting chance on other issues? Why risk ostracism from your brethren by advocating limitations on the scope and status of the profession?

Nonetheless, the inter-industry tensions over reform put the subject matter issue in a new light. AT&T v. Microsoft dealt with an obscure provision of the patent code concerning foreign assembly of components to create products that would infringe in the U.S. -- and whether this applied to reproducing software on media from a master disk. Eli Lilly filed an amicus brief blaming the whole controversy on the Federal Circuit's allowance of patents on intangible subject matter, signed by Eli Lilly's chief patent counsel, a past president of the American Intellectual Property Law Association.

Why would a drug company question patents for intangibles? It is no coincidence that the push for strong patent reform was originally spearheaded by the Business Software Alliance, and strong reform is supported by the financial services sector. Take away patents on intangibles, and much of the momentum behind reform evaporates. In the interests of preserving a unitary patent system in the traditional pharmaceutical model, it makes sense to lop off any outlying troublemakers, such as software and business methods. Although removing patents on intangibles would eliminate a vast source of income for patent professionals, the system would then remain narrowly focused on process in the hands of patent professionals -- and less pressured by the interests of nontraditional sectors.

Portfolio Patenting

It would not be easy for a field as diverse as "software" to agree to opt out, given the accumulation of patents at different levels of abstraction and the proliferation of business models, some of which are more patent-dependent than others. When the Patent and Trademark Office held hearings in 1994, almost all pure-play software publishers (with the notable exception of Microsoft) expressed opposition to software patents. But since then, all have amassed their own patent portfolios, giving them broad protection in the market niche they have traditionally occupied.

Portfolios turn the mythology of the patent system upside down. The policy justification of portfolio patenting in IT, expressed by Thinkfire CEO Dan McCurdy as "net users pay net innovators," makes for rough justice. However, it is different from the classic case for individual patents. Instead of protecting the upstart inventor armed with a patent, the system protects established companies who have had the time and resources to assemble substantial portfolios that function as renewable "thickets" to keep incumbents ensconced and to discourage new entrants from assembling full-blown products.

But there is a downside for established companies, too, that has recently become clear. The same conditions that allow them to amass vast portfolios easily also provide fertile ground for trolls. Lots of easy-to-get patents ensure that some will end up in the hands of speculators, some of whom will get lucky and find their patent is deeply embedded in the complex technology of a successful product.

Portfolio-driven patenting is not unique to software. It pervades IT and, to a lesser extent, other complex technologies, but anybody can generate patentable functionality in software. Software democratizes innovation. Writing software requires no laboratory, no PhD, no manufacturing plant, no distribution chain. Meanwhile, low standards, the presumption of entitlement, and the desire to impress supervisors, upper management, and venture capital induce the filing of tens of thousands of patents each of which may have dozens of claims.

The flipside of massively dispersed patent ownership is massively dispersed liability. Patents of failed companies often end up in the hands of trolls who are neither innovators, nor producers, nor users -- and have no need to license rights from others. Can those loose patents be avoided? At what cost? -- not only to identify problem patents but to figure out how good they are, who owns them, and under what terms they might be available.

Patent thickets impose huge costs because they require assistance from lawyers - quite apart from the costs of acquiring rights or designing around them. The tactically correct solution is not to search but to task lawyers with solving problems only if and when they arise. (8) At the same time, this jungle of rights and miasma of too much information to decipher and interpret creates cover for trolls. They can hide until producers and users have made huge investments in arguably infringing products. For trolls, patents are lottery tickets: if they are lucky, they will be infringed by a deep-pocketed producer. For producers, it's the risk of an aberrant judgment, which can perhaps be averted by flinging enough legal resources against it.

Other than anecdotal evidence, including the sad experience of the insurance industry, (9) it is virtually impossible to get a direct handle on these risks. However, new research by James Bessen and Michael Meurer, soon to be published in a book, Patent Failure, does so indirectly. (10) By examining market reaction to patent litigation, they show how investors view the risks and costs of patents imposed on different sectors. For software and business methods, these are very high indeed.
*****

Microsoft Rises to Sixth on Patent List for 2007

Microsoft was awarded more than 1,600 patents by the U.S. Patent and Trademark Office (USPTO) in 2007, placing it sixth on the list of biggest patent performers, according to IFI Patent Intelligence, which tracks patent awards. IBM, which tried but failed to patent outsourcing last year, won the patent count for the 16th straight year, with more than 3,100 patents.

One way to gauge the level of innovation occurring in the IT industry is to count the number of patents awarded to companies. Since the organizations getting the most patents year after year tend to be developers of hardware and software for businesses and consumers (except for the occasional car maker, such as Honda Motor, which ranked 19th in 2007), this would seem to be a fairly accurate way to tell who has the most creative and productive research and development departments. (IBM's attempt to patent outsourcing was quite creative, but it wasn't productive.)

However, since the USPTO stopped publishing the list of companies receiving the most patents last year, under the assumption that focusing on patent counts was a poor way of gauging creativity, interested parties must now count the patents themselves. Or, if they have better things to do, they can turn to IFI Patent Intelligence, an outfit out of Wilmington, Delaware, to do the heavy counting.

According to IFI's analysis, Microsoft was awarded 1,637 patents last year, nearly a 12 percent increase over the number of patents it received in 2006, when it was number 12 on the list.

Microsoft's increase in patents bucked the trend in patents last year, which saw nearly a 10 percent decline in the number of patents issued by the USPTO.

Darlene Slaughter, general manager of IFI Patent Intelligence, says the 157,284 utility patents issued last year was more or less in line with recent historical averages. "Although the total number of patents issued is down from 2006's record high, it did beat 2005's relatively low showing," she says. "Overall, it's fair to say that 80 percent of the top 35 organizations were down versus the previous year."

There is currently a huge backlog of patents pending, according to IFI. The most recent USPTO annual report shows there were more than 1.1 million patents pending for fiscal year 2007, which means that slightly more than 10 percent of patents applied for are actually granted.

Here's IFI's list of top 10 patent performers of 2007, followed by the number of patents they received:

* IBM--3,148
* Samsung Electronics--2,725
* Canon--1,987
* Matsushita Electric Industrial--1,941
* Intel--1,865
* Microsoft--1,637
* Toshiba--1,549
* Sony--1,481
* Micron Technology--1,476
* Hewlett-Packard--1,470