Software News
This section highlights industry news worth noting, to track the current and projected state of the software industry in Argentina and worldwide.
The Science of Pricing Software
Pricing is not an exact science, but it is not magic either: it is shaped by the perception of your software, market conditions, and its value. So what is the process for finding the winning price?
Software marketing
The blog has entries on the marketing of software products and services.
Saturday, July 20, 2013
What the NSA knows about you
3:30
Juan MC Larrosa
Looking for an intuitive way to understand the kind of data the N.S.A. has been collecting on all of us? A team at MIT has developed a helpful graphic for GMail users. Immersion is a program that reads only the metadata from your email – precisely what the N.S.A. is collecting from telephone and internet records – and creates a visual web of interconnectedness between you and the people in your inbox.
What’s the big deal about collecting this information? If you’re of the mind to give Immersion a try, you can get a sense of the kind of information it can reveal, particularly over time. According to The New Yorker‘s Jane Mayer, you don’t need to know the content of conversations to get the gist of what’s going on. Mayer’s post points out that you might make an appointment with a gynecologist, then an oncologist, and then you may make a series of calls to close family members and friends. What’s going on? It’s not hard to deduce that you’ve received a diagnosis of cancer. Likewise, journalists who count on anonymity to protect their sensitive sources can be outed easily with metadata. And lest you think you are carrying on an extramarital fling unnoticed, metadata can reveal that, too.
This type of intrusion is easy to minimize because metadata is not meaningful or even familiar to most people. Intuitively, we are more concerned with revealing the content of our conversations. Yet if we are to fully understand the significance of this type of data mining, we must present the data in ways that hit home. Immersion is one such way. Check it out.
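As an illustration of how much a metadata-only view reveals, here is a sketch of the kind of contact graph a tool like Immersion could build from nothing but sender and recipient headers. This is not Immersion's actual code, and the message format is invented for the example:

```python
from collections import Counter
from itertools import combinations

def contact_graph(messages):
    """Build a weighted contact graph from email metadata only.

    Each message is a dict with 'from' and 'to' fields (no body needed),
    mirroring the kind of header data a tool like Immersion reads.
    """
    edges = Counter()
    for msg in messages:
        people = [msg["from"]] + list(msg["to"])
        # Every pair of participants on a message is one interaction.
        for a, b in combinations(sorted(set(people)), 2):
            edges[(a, b)] += 1
    return edges

# Toy metadata: senders and recipients only, no content.
meta = [
    {"from": "you", "to": ["doctor"]},
    {"from": "doctor", "to": ["you"]},
    {"from": "you", "to": ["sister", "friend"]},
]
graph = contact_graph(meta)
print(graph[("doctor", "you")])  # 2
```

Even this toy graph shows relationship strength (edge weights) and clusters of contacts without ever reading a message body, which is exactly the point the articles above are making.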
About the Author: Kalliopi Monoyios is the illustrator of several best-selling science books including Neil Shubin's The Universe Within, Shubin’s Your Inner Fish, and Jerry Coyne’s Why Evolution is True. Her illustration portfolio can be found at kalliopimonoyios.com. Follow her solo on Twitter at @eyeforscience, or follow the whole Symbiartic crew at @symbiartic.
Friday, July 19, 2013
The U.S. National Security Agency had introduced back doors into every version of Windows by 1999
3:30
Juan MC Larrosa
NSA Built Back Door In All Windows Software by 1999
Washington Blog
Government Built Spy-Access Into Most Popular Consumer Program Before 9/11
In researching the stunning pervasiveness of spying by the government (it’s much more widespread than you’ve heard, even now), we ran across the fact that the FBI wants software programmers to install a backdoor in all software.
Digging a little further, we found a 1999 article by leading European computer publication Heise which noted that the NSA had already built a backdoor into all Windows software:
A careless mistake by Microsoft programmers has revealed that special access codes prepared by the US National Security Agency have been secretly built into Windows. The NSA access system is built into every version of the Windows operating system now in use, except early releases of Windows 95 (and its predecessors). The discovery comes close on the heels of the revelations earlier this year that another US software giant, Lotus, had built an NSA “help information” trapdoor into its Notes system, and that security functions on other software systems had been deliberately crippled.

The first discovery of the new NSA access system was made two years ago by British researcher Dr Nicko van Someren [an expert in computer security]. But it was only a few weeks ago when a second researcher rediscovered the access system. With it, he found the evidence linking it to NSA.

***

Two weeks ago, a US security company came up with conclusive evidence that the second key belongs to NSA. Like Dr van Someren, Andrew Fernandez, chief scientist with Cryptonym of Morrisville, North Carolina, had been probing the presence and significance of the two keys. Then he checked the latest Service Pack release for Windows NT4, Service Pack 5. He found that Microsoft’s developers had failed to remove or “strip” the debugging symbols used to test this software before they released it. Inside the code were the labels for the two keys. One was called “KEY”. The other was called “NSAKEY”.

Fernandes reported his re-discovery of the two CAPI keys, and their secret meaning, to the “Advances in Cryptology, Crypto’99” conference held in Santa Barbara. According to those present at the conference, Windows developers attending the conference did not deny that the “NSA” key was built into their software.
But they refused to talk about what the key did, or why it had been put there without users’ knowledge.

A third key?!

But according to two witnesses attending the conference, even Microsoft’s top crypto programmers were astonished to learn that the version of ADVAPI.DLL shipping with Windows 2000 contains not two, but three keys. Brian LaMacchia, head of CAPI development at Microsoft, was “stunned” to learn of these discoveries by outsiders. The latest discovery by Dr van Someren is based on advanced search methods which test and report on the “entropy” of programming code.

Within the Microsoft organisation, access to Windows source code is said to be highly compartmentalized, making it easy for modifications to be inserted without the knowledge of even the respective product managers.

Researchers are divided about whether the NSA key could be intended to let US government users of Windows run classified cryptosystems on their machines or whether it is intended to open up anyone’s and everyone’s Windows computer to intelligence gathering techniques deployed by NSA’s burgeoning corps of “information warriors”.

According to Fernandez of Cryptonym, the result of having the secret key inside your Windows operating system “is that it is tremendously easier for the NSA to load unauthorized security services on all copies of Microsoft Windows, and once these security services are loaded, they can effectively compromise your entire operating system”. The NSA key is contained inside all versions of Windows from Windows 95 OSR2 onwards.

***

“How is an IT manager to feel when they learn that in every copy of Windows sold, Microsoft has a ‘back door’ for NSA – making it orders of magnitude easier for the US government to access your computer?” he asked.
Thursday, July 18, 2013
Why local mobile marketing is exploding
3:00
Juan MC Larrosa
Why Local-Mobile Marketing Is Exploding
Location-based mobile marketing promises the sky: high conversion rates, surgical targeting, and rich consumer profiles.
But does it deliver? According to many accounts, it does.
Not surprisingly, retailers, brands, and agencies are scrambling to hone their location-based approaches. These encompass everything from "geo-aware" and "geo-fenced" ad campaigns, to hyper-local efforts keyed to Wi-Fi hotspots, and algorithmic location-based targeting of audience segments like soccer moms, bargain hunters, coffee enthusiasts, etc.
In a new report from BI Intelligence on local-mobile marketing, we look at key stats on the location-based services marketplace that indicate its supremacy in mobile marketing, explain how the most important techniques (geo-aware, geo-fenced, and audience-based local-mobile campaigns) work, examine the cornerstones of a successful location-based mobile strategy, such as data and audience building, look at who holds the valuable location-based data, and analyze the six most effective local-mobile marketing tactics.
Here's an overview of the location-mobile marketing explosion:
- Location is the new cookie: Collecting data has always been difficult because mobile does not support third-party cookies that travel easily across the ecosystem, allowing for straightforward tracking and data-gathering. That's where location-based mobile technology comes in. It gives marketers new ways to identify and track mobile audiences, and with the aid of algorithms, it can also group them into behavioral and demographic segments for targeting.
- Money is flowing into location-based mobile marketing: A recent survey of 400 brand executives by Balihoo found that 91% planned to increase their investments in location-based marketing campaigns in 2013. Meanwhile, a study by Berg Insight found that location-enabled ad spend reached about 8% of total mobile ad spend for 2012. This proportion is expected to increase to 33% by 2017.
- Location-based data is driving much of the interest - and success: Enabling campaigns with local data produces measurable results. In a study of over 2,500 of its mobile marketing campaigns, Verve found that its location-based ad efforts were about twice as effective as the mobile industry average click-through rate (CTR) of 0.4%. Geo-aware ads, geo-fenced ads, and location data paired with audience demographics or purchase intent are all proving to be extremely successful.
- Location is extending beyond the smartphone: The location conversation may have started out as a way to take advantage of mobile phones, but as technology continues to evolve, the conversation needs to broaden. In 2012, only 12% of smartphone owners and 17% of tablet owners said they used their device throughout the entire shopping process. This year, one-third of smartphone and tablet owners said they did so. Additionally, more tablet consumers are beginning their shopping process on their tablets. This shows that location ads should be targeted to tablets as well as smartphones, because the first search for a local business might take place on a tablet.
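At their core, the "geo-fenced" campaigns described above reduce to a radius check against a device's reported location. Here is a minimal sketch of that check; the store coordinates and one-kilometer radius are hypothetical, and real ad servers layer on accuracy filters, dwell time, and frequency caps:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def in_geofence(user, fence_center, radius_km):
    """True if the user's reported location falls inside the fence."""
    return haversine_km(*user, *fence_center) <= radius_km

store = (40.7580, -73.9855)        # hypothetical store location
nearby_user = (40.7590, -73.9850)  # a few hundred meters away
far_user = (34.0522, -118.2437)    # another city entirely
print(in_geofence(nearby_user, store, 1.0))  # True
print(in_geofence(far_user, store, 1.0))     # False
```

A geo-aware (as opposed to geo-fenced) campaign would swap the fixed fence for content that adapts to whatever location comes in, but the distance primitive is the same.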
Wednesday, July 17, 2013
Metcalfe's Law: recursing down the long tail of social networks
3:30
Juan MC Larrosa
Guest Blogger Bob Metcalfe: Metcalfe’s Law Recurses Down the Long Tail of Social Networks
The blogosphere has started bubbling some interesting discussion of how Metcalfe’s Law applies to current Web 2.0 dynamics like social networking. Some IEEE types, Brad Feld, Niel Robertson, a Ph.D. student named Fred Stutzman, my partner Sim Simeonov, myself, and a few others have posted on this in the last few weeks.
Bob Metcalfe, who invented the law in the first place and is my partner at Polaris (and who, along with Al Gore, invented the Internet…), offers his own view in a guest blog post below.
Metcalfe’s original insight was that the value of a communications network grows super-linearly (quadratically, as it turns out) as the number of users grows.
All seem to agree that Metcalfe’s Law offers a good theoretical framework for thinking about Social Networks. Robertson argues that in addition to the number of users, the rank of a social network is another variable that should be considered when the law is applied to a social network as opposed to a communications network; Stutzman, on the other hand, suggests that one ought to add consideration of “the sum of actions and associations” enabled by a particular social network.
Not surprisingly, Metcalfe himself offers a more insightful and, I think, important contribution to the conversation — that to understand the value of a social network we need to consider not just the number of users but also the affinity between the members of the network.
Enjoy Bob’s post, and by all means please feel free to add your own comments…
Metcalfe’s Law Recurses Down the Long Tail of Social Networking
By Bob Metcalfe
Metcalfe’s Law is under attack again. This latest attack argues that the value of a network does not grow as the square of its number of users, V~N^2, like I’ve been saying for 26 years, but slower, V~N*Log(N). The new attack comes in a cover story by Briscoe, Odlyzko, and Tilly in a prestigious 385,000-member social network called IEEE SPECTRUM. And now they are saying that my law is not just wrong but also “dangerous.”
Below is the original not PowerPoint but 35mm slide I used circa 1980 to convince early Ethernet adopters to try LANs large enough to exhibit network effects – networks larger than some “critical mass.”
This slide was named “Metcalfe’s Law” by George Gilder in the September 1993 issue of FORBES and later in his book TELECOSM. Again, thank you, George.
Ethernet’s early adopters took this advice, and so my computer communication compatibility company, 3Com, prospered. Last year, according to IDC, 33 years after Ethernet’s invention at Xerox Parc, a quarter billion new Ethernet switch ports were shipped worldwide.
And now for some inconvenient truths. Al Gore famously claimed to have invented the Internet in the 1980s, which struck some of us as a little late. Like his father, Al Gore Senior, who claimed to have invented what is inexplicably called the Eisenhower Interstate Highway System, Al Gore Junior invented what he called the Information Superhighway. The actual Internet was invented, I think, either by BBN at UCLA in 1969 or at Stanford in 1973.
With his Information Superhighway, Gore invented not the Internet but the Internet … Bubble. I was present when Vice President Gore mentioned Metcalfe’s Law in an MIT commencement address, inflating his administration’s Internet Bubble. I helped Gore inflate the Internet Bubble by touting Metcalfe’s Law. I am not sorry.
There are people who think the Internet Bubble was the worst thing that ever happened, and I hope those people are satisfied now that Ken Lay is dead. To those people my law may be, as the SPECTRUM article says, dangerous. Because my law allegedly over-estimates the values of networks, it might be used to inflate a second Internet Bubble, probably the imminent Social Networking Bubble, which will then inevitably burst. Can’t have that.
So, in IEEE SPECTRUM, Briscoe, Odlyzko, and Tilly debunk Metcalfe’s Law, again. It turns out that the value of a network does not grow as the square of the number of its users, V~N^2, but much more slowly, V~N*log(N), they figure. Cold water can now be thrown on the promoters of social networking. The bursting of a second Internet Bubble is thereby averted.
In renewed defense of Metcalfe’s Law, let me first point out that Al Gore has moved on to the invention of Global Warming. If a second Internet Bubble is to be inflated, I will have to do it without Gore’s hot air this time… Let’s get started.
Let me contrast Metcalfe’s Law with Moore’s Law. Moore’s and Metcalfe’s Laws are similar in that both begin with the letter M. They are different in that Moore’s Law is exponential in time while Metcalfe’s Law is quadratic in size.
Moore‘s Law, which states that semiconductors double in complexity every two years, has been numerically accurate since 1965. Metcalfe’s Law, on the other hand, has never been evaluated numerically, certainly not by me.
Nobody, including Briscoe, Odlyzko, and Tilly in their SPECTRUM attack, has attempted to estimate what I hereby call A, network value’s constant of proportionality in my law, V=A*N^2. Nor has anyone tried to fit any resulting curve to actual network sizes and values.
As I wrote a decade ago, Metcalfe’s Law is a vision thing. It is applicable mostly to smaller networks approaching “critical mass.” And it is undone numerically by the difficulty in quantifying concepts like “connected” and “value.”
So, if the value of a network does grow as V~N*log(N), I challenge Briscoe, Odlyzko, and Tilly to prove it with some real network sizes and values. In the meantime, I’ll stick with V~N^2.
While they’re at it, my law’s critics should look at whether the value of a network actually starts going down after some size. Who hasn’t received way too much email or way too many hits from a Google search? There may be diseconomies of network scale that eventually drive values down with increasing size. So, if V=A*N^2, it could be that A (for “affinity,” value per connection) is also a function of N and heads down after some network size, overwhelming N^2. Somebody should look at that and take another crack at my poor old law.
But, if anybody wants to spend time on Metcalfe’s Law, let me suggest what are likely to be more fruitful paths. Accurate formulas for the static value of a network are fine, but it would be much more useful to understand the dynamics of network value over time. Also important would be linking Metcalfe’s Law to Moore’s Law and showing how that potent combination underlies what WIRED’s Editor-in-Chief Chris Anderson calls The Long Tail.
Metcalfe’s Law points to a critical mass of connectivity after which the benefits of a network grow larger than its costs. The number of users at which this critical mass is achieved can be calculated by solving C*N=A*N^2, where C is the cost per connection and A is the value per connection. The N at which critical mass is achieved is N=C/A. It is not much of a surprise that the lower the cost per connection, C, the lower the critical mass number of users, N. And the higher the value per connection, A, the lower the critical mass number of users, N.
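Bob's break-even algebra above can be checked numerically. Here is a minimal sketch; the dollar figures for C and A are invented for illustration:

```python
def critical_mass(cost_per_connection, value_per_connection):
    """Solve C*N = A*N^2 for the nonzero root: N = C / A.

    Above this N, network value A*N^2 exceeds network cost C*N.
    """
    return cost_per_connection / value_per_connection

# Hypothetical numbers: each connection costs $100 to provision
# and contributes $5 of value per other member it can reach.
C, A = 100.0, 5.0
n_star = critical_mass(C, A)
print(n_star)  # 20.0

# Sanity check on either side of the threshold:
value = lambda n: A * n * n
cost = lambda n: C * n
assert value(n_star - 1) < cost(n_star - 1)  # below critical mass: cost wins
assert value(n_star + 1) > cost(n_star + 1)  # above it: value wins
```

The two assertions make the text's point concrete: lowering C or raising A moves the crossover N=C/A down, so smaller groups become worth connecting.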
Continuing to paint with a broad brush, I take Moore’s Law to mean that my law’s connectivity cost C — the cost of the computing and communication used to create connectivity — is halved every two years. Combining Moore’s and Metcalfe’s Laws, therefore, the number of users at which a network’s value exceeds its cost halves every two years. And that’s just considering C.
I am reminded that the first Ethernet card I sold at 3Com in 1980 went for $5,000. By 1982, the cost was down to $1,000. Today, Ethernet connections cost under $100, perhaps as low as $5 per connection. Whatever the critical mass sizes of Ethernets were in 1980, they are a lot lower now.
But that’s not all. The denominator of C/A, the constant of value proportionality, A, has been going up. In the 1980s, Ethernet connectivity allowed users only to share printers, share disks, and exchange emails — a very low A indeed. But today, Internet connectivity brings users the World Wide Web, Amazon, eBay, Google, iTunes, blogs, … and social networking. The Internet’s value per connection, A, is a lot higher now, which means the critical mass size of the Internet, C/A, is a lot lower now, and for two reasons: cost and value.
Amazon connectivity among people and books allows my five-year-old book, INTERNET COLLAPSES, to be available still, with Amazon rank below 1,000,000. There’s eBay connectivity among people with ever more arcane things to buy and sell. There’s blogosphere connectivity among many more writers each with many fewer readers. Daily newspaper circulations have been going down since 1984, and there are now millions of active blogs, most of them very small. Blogs are an early form of social networking among growing numbers of smaller groups along ever more refined dimensions of affinity.
Social networks form around what might be called affinities. For each affinity, there is a critical mass size given by N=C/A, as above. If the number of people sharing an affinity is above this critical mass, then their social network may form, otherwise not. As Internet access gets cheaper and the tools for exploiting affinities get better, many more social networks will become viable.
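The dynamic described above, Moore's Law halving C every two years and thereby halving the critical mass N=C/A, can be projected in a few lines. The starting cost is loosely inspired by the 1980 Ethernet card price quoted earlier, and the value-per-connection figure is hypothetical:

```python
def critical_mass_over_time(c0, a, years, halving_period=2):
    """Project the critical mass N* = C/A as connection cost halves
    every `halving_period` years, per the Moore's-Law reading in the
    text (value per connection A is held fixed for simplicity)."""
    return [(y, (c0 / 2 ** (y / halving_period)) / a)
            for y in range(0, years + 1, halving_period)]

# Hypothetical starting point: C = $5000 (a 1980 Ethernet card),
# A = $5 of value per connection.
for year, n_star in critical_mass_over_time(5000, 5, 8):
    print(f"year {year}: critical mass ~ {n_star:.1f} users")
```

Each two-year step halves the audience size an affinity needs before its network "may form," which is the mechanism the paragraph above says makes ever more niche social networks viable.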
Somebody should look at this. Somebody already has: Chris Anderson.
Moore’s and Metcalfe’s Laws bring us to Chris Anderson’s new book, THE LONG TAIL, which you should read immediately. (Actually, there’s no rush. Anderson’s book currently has double-digit rank at Amazon, like my book did five years ago. Take your time and you might even get THE LONG TAIL for next to nothing as it moves down Amazon’s Long Tail.)
Anderson‘s Long Tail explains how, for example, more people are listening to music other than the Top 40 hits. Thanks to iTunes, even though there still is a Top 40, the fraction of music listening from down music’s Long Tail is increasing. It remains to be seen whether the growth of music’s Long Tail increases total music sales, which would be my guess, or whether it shifts revenues away from Britney Spears.
For another example of The Long Tail, millions of books like mine, which would otherwise be out of print, can still be found at Amazon.com and delivered in a day or two. Try buying INTERNET COLLAPSES, used if you must.
Let me leave as an exercise for the reader to develop the formulas for how Amazon’s Long Tail grows to the right as the combination of Moore’s and Metcalfe’s Laws biennially halves the critical-mass size of book audiences. Book buying generally shrinks with time, but I’m guessing that Amazon’s per book critical masses, its N=C/As, have been shrinking faster.
Similar formulas could quantify how Moore’s and Metcalfe’s Laws have also driven down the critical mass sizes (N=C/A) of Internet-enabled social networks and extended their Long Tail to the right. Looking more closely, I see that Metcalfe’s Law recurses. Just being on the Internet has some increasing value that may be described by my law. But then there’s the value of being in a particular social network through the Internet. It’s V~N^2 all over again. Down a level, N is now the number of people in a particular social network, which has its own C, A, V, and critical mass N.
Of course the cost (C*N) of getting connected in a social network has been going down thanks to the proliferation of the Internet and its decreasing price. The value (A*N^2) of particular social networks has been growing with broadband and mobile Internet access. Emerging software tools expedite the viral growth and ease of communication among network members, also boosting the value of underlying connectivity.
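The recursion Metcalfe describes, where each sub-network carries its own C, A, and critical mass, can be totaled in a short sketch. The sub-network figures below are invented for illustration (and real users belong to several sub-networks at once, which this toy version ignores):

```python
def total_network_value(communities):
    """Sum Metcalfe value A*N^2 over sub-networks that have reached
    their own critical mass (N > C/A), per the recursion in the text.

    `communities` is a list of (n_members, cost_per_conn, value_per_conn).
    """
    total = 0.0
    for n, c, a in communities:
        if n > c / a:  # viable: above this sub-network's critical mass
            total += a * n * n
    return total

# Hypothetical sub-networks down the long tail: one large, low-affinity
# network, one mid-size higher-affinity one, and one still below
# critical mass (C/A = 10) that contributes nothing yet.
subnets = [
    (10_000, 50, 1.0),
    (200, 50, 5.0),
    (8, 50, 5.0),
]
print(total_network_value(subnets))  # 100200000.0
```

As Internet access cheapens (C falls) and tools sharpen affinities (A rises), sub-networks cross their thresholds and start contributing, extending the tail to the right exactly as the post argues.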
So, if you want to spend time on V~N^2, and I hope you do, then forget minor refinements like V~N*log(N) and help inflate the next Internet Bubble by figuring out how Metcalfe’s Law recurses down The Long Tail of social networking.
Bob Metcalfe received the National Medal of Technology from President Bush in 2005 for his leadership in the invention, standardization, and commercialization of Ethernet. Bob is a general partner of Polaris Venture Partners, where he serves on the boards of Polaris-backed companies including Ember, GreenFuel, Infinite Power Solutions, Mintera, Narad, Paratek Microwave, and SiCortex.
Wednesday, July 10, 2013
Metcalfe's Law: more misunderstood than wrong
3:30
Juan MC Larrosa
Metcalfe’s Law: more misunderstood than wrong?
The industry is at it again–trying to figure out what to make of Metcalfe’s Law. This time it’s IEEE Spectrum with a controversially titled “Metcalfe’s Law is Wrong”. The main thrust of the argument is that the value of a network grows O(n log n) as opposed to O(n^2). Unfortunately, the authors’ O(n log n) suggestion is no more accurate or insightful than the original proposal.
There are three issues to consider:
- The difference between what Bob Metcalfe claimed and what ended up becoming Metcalfe’s Law
- The units of measurement
- What happens with large networks
The typical statement of the law is “the value of a network increases proportionately with the square of the number of its users.” That’s what you’ll find at the Wikipedia link above. It happens to not be what Bob Metcalfe claimed in the first place. These days I work with Bob at Polaris Venture Partners. I have seen a copy of the original (circa 1980) transparency that Bob created to communicate his idea. IEEE Spectrum has a good reproduction, shown here.
The unit of measurement along the X-axis is “compatibly communicating devices”, not users. The credit for the “users” formulation goes to George Gilder who wrote about Metcalfe’s Law in Forbes ASAP on September 13, 1993. However, Gilder’s article talks about machines and not users. Anyway, both the “users” and “machines” formulations miss the subtlety imposed by the “compatibly communicating” qualifier, which is the key to understanding the concept.
Bob, who invented Ethernet, was addressing small LANs where machines are visible to one another and share services such as discovery, email, etc. He recalls that his goal was to have companies install networks with at least three nodes. Now, that’s a far cry from the Internet, which is huge, where most machines cannot see one another and/or have nothing to communicate about… So, if you’re talking about a smallish network where indeed nodes are “compatibly communicating”, I’d argue that the original suggestion holds pretty well.
The authors of the IEEE article take the “users” formulation and suggest that the value of a network should grow on the order of O(n log n) as opposed to O(n^2). Are they correct? It depends. Is their proposal a meaningful improvement on the original idea? No.
To justify the log n factor, the authors apply Zipf’s Law to large networks. Again, the issue I have is with the unit of measurement. Zipf’s Law applies to homogeneous populations (the original research was on natural language). You can apply it to books, movies and songs. It’s meaningless to apply it to the population of books, movies and songs put together or, for that matter, to the Internet, which is perhaps the most heterogeneous collection of nodes, people, communities, interests, etc. one can point to. For the same reason, you cannot apply it to MySpace, which is a group of sub-communities hosted on the same online community infrastructure (OCI), or to the Cingular / AT&T Wireless merger.
The main point of Metcalfe’s Law is that the value of networks exhibits super-linear growth. If you measure the size of networks in users, the value definitely does not grow O(n^2) but I’m not sure O(n log n) is a significantly better approximation, especially for large networks. A better approximation of value would be something along the lines of O(sum over c in C of m_c log m_c), where C is the set of homogeneous sub-networks/communities and m_c is the size of the particular sub-community/network. Since the same user can be a member of multiple social networks, and since |C| is a function of N (there are more communities in larger networks), it’s not clear what the total value will end up being. That’s a Long Tail argument if you want one…
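To see how differently these candidate value functions behave, here is a small numerical comparison. The community-sum variant below uses a hypothetical Zipf-like split of users into a fixed number of sub-communities; that split is only one way to instantiate the sum over m_c log m_c sketched above, not a claim about real network structure:

```python
import math

def metcalfe(n):
    """Original proposal: V ~ n^2."""
    return n * n

def odlyzko(n):
    """IEEE Spectrum proposal: V ~ n log n."""
    return n * math.log(n)

def community_sum(n, n_communities=100, s=1.0):
    """Sum m_c log m_c over sub-communities, with community sizes
    drawn from a hypothetical Zipf-like split of the n users
    (m_c proportional to 1/rank^s; overlap between communities
    is ignored for simplicity)."""
    weights = [1 / r ** s for r in range(1, n_communities + 1)]
    z = sum(weights)
    total = 0.0
    for w in weights:
        m = n * w / z  # members in this sub-community
        if m > 1:
            total += m * math.log(m)
    return total

for n in (1_000, 1_000_000):
    print(n, metcalfe(n), round(odlyzko(n)), round(community_sum(n)))
```

Running this shows how far apart the curves sit at scale, and how much the community-sum answer depends on the assumed distribution of community sizes, which is exactly why the post argues the total value is hard to pin down.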
Very large networks pose a further problem. Size introduces friction and complicates connectivity, discovery, identity management, trust provisioning, etc. Does this mean that at some point the value of a network starts going down (as another good illustration from the IEEE article shows)? It depends on infrastructure. Clients and servers play different roles in networks. (For more on this in the context of Metcalfe’s Law, see Integration is the Killer App, an article I wrote for XML Journal in 2003, having spent less time thinking about the problem.) P2P sharing, search engines and portals, anti-spam tools and federated identity management schemes are just a few examples of the myriad of technologies that have come about to address scaling problems on the Internet. MySpace and LinkedIn have very different rules of engagement and policing schemes. These communities will grow and increase in value very differently. That’s another argument for the value of a network aggregating across a myriad of sub-networks.
Bottom line, the article attacks Metcalfe’s Law but fails to propose a meaningful alternative.