I’ve lived in Boulder for 15 years after living in Boston for a dozen. While I’ve spent a lot of time in Silicon Valley — both as an angel and venture capital investor — I’ve never lived there. While the firm I’m a partner in — Foundry Group — invests all over the United States, I regularly hear statements like, “The only place to start a tech company is in Silicon Valley.”
When David Cohen (CEO of TechStars) and I co-founded TechStars in Boulder, Colo., in 2006, we had two goals in mind. The first was to energize the early stage software/Internet entrepreneurial community in Boulder. The second was to get new first-time entrepreneurs involved more deeply in the Boulder entrepreneurial community. Four years later, we feel like we really understand how entrepreneurial communities grow and evolve.
First is the recognition that Silicon Valley is a special place. It’s futile to try to be the next Silicon Valley. Instead, recognize that Silicon Valley has strengths and weaknesses. Learn from the strengths and incorporate the ones that fit with your community while trying to avoid the weaknesses. Leverage the natural resources of your community and be the best, unique entrepreneurial community that you can be. Basically, play to your strengths.
Next, get ready for a 20-year journey. Most entrepreneurial communities ramp up over a three- to five-year period and then stall or collapse, with the early leaders getting bored, moving away, getting rich and changing their priorities, or just disengaging. It takes a core group of leaders — at least half a dozen — to commit to provide leadership over at least 20 years.
But these two things — playing to the strengths of your community and going on a 20-year journey — are table stakes. Without them, you won’t get anywhere, but you need more. In Boulder, we’ve figured out two critical things for creating a sustainable entrepreneurial community.
Saturday, October 30, 2010
Startup community: It takes a core group of leaders... to commit to provide leadership over at least 20 years
Friday, October 29, 2010
I’ve been looking at what on earth is going on at Amazon – actually a massively large organization these days, with so much software that has to scale so radically that they simply could not adopt ANY conventional wisdom. They had the massive front end code base, the massive back end code base, and every holiday season things nearly fell off the cliff. They simply could not scale large enough, fast enough, and every attempt to scale brought decreasing increments of value relative to the additional cost. So Amazon changed its architecture. There is no such thing, really, as a database anymore; there are only services. You want data, you want something done, you ask a service to do it.
In order for this to work (=scale) services had to be small and independent. In fact CTO Werner Vogels says that anything which needs agreement will eventually fail at scale. Thus services run locally. There is no central control to fail (much like the Internet). And guess what. He found that each service could have its own team. This team does it all – customer interaction, deciding what to develop, development, deployment, operations, support. I mean everything. No handoffs. And the size of each team is no more people than can be fed with 2 pizzas.
Now there are many, many 2-pizza teams, each completely owning a service, cradle to grave. But note – there are no standards for configuration management, IDEs, languages, nothing. Teams can do whatever they want, although some supported tools are easier to use, so they are more common. The idea that anything which needs agreement will eventually fail at scale applies to people as well as software.
The not-so-amazing thing is that this works. Amazingly well. So Pete, while I agree that there is a size beyond which you have to have standards outside the team, I also see that there is an even bigger size beyond which you simply no longer have that luxury. Anything which needs agreement will eventually fail at scale.
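The service-ownership idea above can be sketched in miniature. This is a hypothetical illustration (the class names and interfaces are mine, not Amazon's): each team's service owns its own storage privately, and other teams ask the service to do things rather than reaching into a shared database.

```python
# Hypothetical sketch of team-owned services (not Amazon's actual code).
# Each service hides its own state; nothing is shared except the interface.

class OrderService:
    """Owned cradle-to-grave by one small team; its storage is private."""
    def __init__(self):
        self._orders = {}  # private state: no other team touches this

    def place_order(self, order_id, item):
        self._orders[order_id] = item
        return order_id

    def get_order(self, order_id):
        return self._orders.get(order_id)


class StorefrontService:
    """A different team's service. It asks OrderService to do work
    instead of querying a shared database directly."""
    def __init__(self, orders):
        self._orders = orders

    def buy(self, order_id, item):
        return self._orders.place_order(order_id, item)


orders = OrderService()
store = StorefrontService(orders)
store.buy("o1", "book")
print(orders.get_order("o1"))  # the storefront never touched a shared table
```

The point of the sketch is the boundary: if `OrderService`'s team swaps out its storage tomorrow, the storefront team never needs to agree to anything.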
3. If I'm seriously expected to think of this as a netbook, I really resent that it costs almost $1000 more than the last netbook I bought.
3a. That netbook cost a lot less, but did a lot more. Longer battery life. Larger disk capacity. Three USB ports, not two. Ethernet. SD card slot. I know why we need all those things. All of them. Why does Apple think it can ship so much less and charge so much more and not force product comparisons? They're depending on the Reality Distortion Field keeping people from being even slightly pragmatic in reviewing this product.
Most entrepreneurs, when asked, will tell you that hiring the “right people” is one of the most important things they do for their companies. However, what many entrepreneurs won’t tell you is that despite their best efforts, they suck at picking the best people during the recruitment process. I definitely fall into this camp.
This doesn’t just apply to hiring in the management ranks and technical staff, it applies to everyone. During most of the years I’ve run startups, I’ve always considered myself pretty good at detecting startup talent. But, the empirical data suggests that I’m almost as likely to screw it up completely as I am to get it right. Over time, as a startup founder, you learn not to rely on all the conventional proxies for trying to predict the probability of success of any given hire. Things like interviews (however intense), tests, grades, top universities, etc. are all only somewhat effective in raising your odds of making the right decision. After all is said and done, you’re likely to screw it up more often than you realize – or are likely willing to admit. And, the problem is not just limited to you – others on the team are not that much better at it.
Thursday, October 28, 2010
As of December 2007, I have the freedom to work on any project I want for the rest of my life while simultaneously providing for my family, never again worrying about bills, debt, having a place to sleep, or sending our daughter to any college she wants.
I can stay home with my wife and new baby girl for as long as I want, having all the precious time and experiences and memories that they say money can’t buy.
But, in the sense of securing that freedom, it can.
And by crossing the line, I did.
16. Find yourself a “sherpa.” This is someone who has done it before — raised money, done deals, worked with startups. Give this person 1 to 2% of your company in exchange for their time. Rely on them to open doors to future investors. Use them as a sounding board for corporate development issues. Don’t do this by committee. Advisory boards never amount to much. Find one person, make them your sherpa, and lean on them.
Sunday, October 24, 2010
Forming a startup is much less risky when you are younger. After mentioning that I was in high school, almost every person I talked to between sessions said it’s a great idea to start young. Now that I think of it, in some situations there is zero risk involved in starting a startup, in which case it becomes purely a learning experience. This is what they told me: starting young will help you gain a better sense of what type of team is needed for a successful startup in the future, or of the inevitable insanity involved in starting those one-person startups. Additionally, a number of speakers stressed the importance of doing things NOW: Don’t wait if you have an idea. Quit now if you are failing and don’t be afraid to fail.
Get involved at a startup. Almost all of the venture capitalists and angel investors who talked on stage said the best way to approach them is through someone they already know. It almost seems to be a recurring pattern that some of the most successful startups today came from people with experience working at other startups. This made me realize the importance of getting out there and really networking with people in the startup scene. Keeping this in mind, it’s probably far more worthwhile to intern at a startup than at an established company.
This has turned out to be the most important quality in startup founders. We thought when we started Y Combinator that the most important quality would be intelligence. That's the myth in the Valley. And certainly you don't want founders to be stupid. But as long as you're over a certain threshold of intelligence, what matters most is determination. You're going to hit a lot of obstacles. You can't be the sort of person who gets demoralized easily.
Bill Clerico and Rich Aberman of WePay are a good example. They're doing a finance startup, which means endless negotiations with big, bureaucratic companies. When you're starting a startup that depends on deals with big companies to exist, it often feels like they're trying to ignore you out of existence. But when Bill Clerico starts calling you, you may as well do what he asks, because he is not going away.
Saturday, October 23, 2010
People who think that a book—even R.L. Stine's grossest masterpiece—can compete with the powerful stimulation of an electronic screen are kidding themselves. But on the level playing field of a quiet den or bedroom, a good book like "Treasure Island" will hold a boy's attention quite as well as "Zombie Butts from Uranus." Who knows—a boy deprived of electronic stimulation might even become desperate enough to read Jane Austen.
Most importantly, a boy raised on great literature is more likely to grow up to think, to speak, and to write like a civilized man. Whom would you prefer to have shaped the boyhood imagination of your daughter's husband—Raymond Bean or Robert Louis Stevenson?
Scott Adams Blog: with the advent of touchscreen devices we are being transformed from producers into consumers.
Another interesting phenomenon of the iPhone and iPad era is that we are being transformed from producers of content into consumers. With my BlackBerry, I probably created as much data as I consumed. It was easy to thumb-type long explanations, directions, and even jokes and observations. With my iPhone, I try to avoid creating any message that is over one sentence long. But I use the iPhone browser to consume information a hundred times more than I did with the BlackBerry. I wonder if this will change people over time, in some subtle way that isn't predictable. What happens when people become trained to think of information and entertainment as something they receive and not something they create? I think this could be a fork in the road for human evolution. Perhaps in a million years, humans will feel no conversational obligation to entertain or provide useful information. That will be the function of the Internet. Someday a scientist will identify the introduction of the iPhone as the point where evolution began to remove conversation from the list of human capabilities. And when the scientist forms this realization, he won't tell his spouse because conversation won't exist. He'll put it on the Internet.
Touchscreens are great for passively browsing, as Scott Adams noted in the passage above.
Because we run an entire network of websites devoted to learning by typing words on a page, it's difficult for me to get past this.
Friday, October 22, 2010
Thursday, October 21, 2010
Test-First Teaching provides a fundamental shift in the way people learn software development. Initially, it helps the student focus on learning very basic syntax while being able to independently confirm when they have successfully completed an exercise. That immediate feedback is valuable for cementing knowledge.
Test-first teaching also builds an understanding of all of the arcane error messages in a low-stress situation. The first thing you see, before you have written a line of code, is an error. Then you discover what you need to do to fix that error. Test-first teaching helps people intuitively understand that mistakes are a natural part of the software development process.
In traditional programming exercises, you are either given a fairly large task and asked to implement the whole thing, or you are provided with "skeleton code" -- source code that has been eviscerated to remove key sections, which you are asked to fill in.
"Large task" exercises are often challenging to students because of their sheer size. Many lines of code need to be written before you receive any positive reinforcement. This can be frustrating to beginners, and boring for advanced students.
"Skeleton code" exercises are also frustrating. The task of the student should be to figure out how to write code that will accomplish the given task. With skeleton code, you are first presented with the task of figuring out what the original author was trying to do; of reading through the code (often littered with idiosyncratic idioms and obscure comments); and then of trying to implement just one part of the algorithm, without necessarily understanding the larger picture. If the fill-in-the-blank code section is too complicated, the student may never complete the assignment; if it's too simple, no learning may be gained by the exercise.
Finally, in both types of traditional exercises, as a student you don't really know when you are finished! Sometimes, you will succeed in the task, but neglect to print the results, and will keep at it, believing you are still missing something; other times, you might write code that seems to work but is crucially flawed in some way or another. This is one of the most powerful features of test-first development -- you code until the test passes, and then you stop coding. The test provides a map, informing you of where to begin, and where to end.
Test-first teaching is appropriate for both guided and solo use. Students in a classroom may rely on classmates or teachers for guidance; but if alone, the tests provide some measure of feedback and guidance (although unit tests can never actually debug and fix the code).
Perhaps the most important aspect of test-first teaching is that it teaches the whole process, from opening a new file in a text editor to compiling and running. At the end of the day, the students can say, "At least I know how to write a program." Many exercises, especially skeletons but also those based on tools and toy problems, end up skipping the fundamentals that are vital not just for coding on a day-to-day basis, but also for cementing the higher-level concepts into habits and skills.
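The cycle described above (see the test fail, write just enough code, watch it pass, stop) can be sketched in a few lines of Python; the exercise and the function name are hypothetical, invented for illustration.

```python
# Hypothetical test-first exercise. The student receives only the test.
# Running it first raises a NameError -- one of those "arcane error
# messages" learned in a low-stress setting, before any code exists.

def test_fizz():
    assert fizz(3) == "fizz"   # fails with NameError until fizz() is written
    assert fizz(4) == "4"

# The student then writes just enough code to make the test pass:
def fizz(n):
    return "fizz" if n % 3 == 0 else str(n)

test_fizz()                    # no assertion error means the exercise is done
print("exercise complete")
```

The test is the map: the student knows exactly where to start (the first error) and exactly when to stop (the test passes).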
Wednesday, October 20, 2010
Twitter- and Facebook-based activism gathers less motivated participants than offline activism, which is harder to take part in.
But how did the campaign get so many people to sign up? By not asking too much of them. That’s the only way you can get someone you don’t really know to do something on your behalf. You can get thousands of people to sign up for a donor registry, because doing so is pretty easy. You have to send in a cheek swab and—in the highly unlikely event that your bone marrow is a good match for someone in need—spend a few hours at the hospital. Donating bone marrow isn’t a trivial matter. But it doesn’t involve financial or personal risk; it doesn’t mean spending a summer being chased by armed men in pickup trucks. It doesn’t require that you confront socially entrenched norms and practices. In fact, it’s the kind of commitment that will bring only social acknowledgment and praise.
The evangelists of social media don’t understand this distinction; they seem to believe that a Facebook friend is the same as a real friend and that signing up for a donor registry in Silicon Valley today is activism in the same sense as sitting at a segregated lunch counter in Greensboro in 1960. “Social networks are particularly effective at increasing motivation,” Aaker and Smith write. But that’s not true. Social networks are effective at increasing participation—by lessening the level of motivation that participation requires. The Facebook page of the Save Darfur Coalition has 1,282,339 members, who have donated an average of nine cents apiece. The next biggest Darfur charity on Facebook has 22,073 members, who have donated an average of thirty-five cents. Help Save Darfur has 2,797 members, who have given, on average, fifteen cents. A spokesperson for the Save Darfur Coalition told Newsweek, “We wouldn’t necessarily gauge someone’s value to the advocacy movement based on what they’ve given. This is a powerful mechanism to engage this critical population. They inform their community, attend events, volunteer. It’s not something you can measure by looking at a ledger.” In other words, Facebook activism succeeds not by motivating people to make a real sacrifice but by motivating them to do the things that people do when they are not motivated enough to make a real sacrifice. We are a long way from the lunch counters of Greensboro.
It actually makes sense. If the requirement to participate is just to click the 'Like' button, then I think a lot more people are willing to participate, compared to wearing a black shirt and marching downtown under the noon heat. Yes, this can mean a lot more (way more) participants. However, the quality of these participants and their motivation might be less than that of the ones who really made the decision to intentionally endure the heat to join the cause. So if we ever need a real revolution, one that requires people to die (think Che Guevara and Fidel Castro), it's questionable how effective Twitter and Facebook will be.
OK, this can probably work in a situation where there's a larger mass with weaker ties showing weak support, boosting the morale and motivation of the ones with stronger ties who are willing to show strong support. Often, this is already enough to fuel the revolution. You can imagine that some will take up arms and go into the mountains to fight an unending rebellion, while the supporters stay on with their lives, still supporting them any way they can from their standpoint.
Tuesday, October 19, 2010
Many times, when working with Git, you may want to revise your commit history for some reason. One of the great things about Git is that it allows you to make decisions at the last possible moment. You can decide what files go into which commits right before you commit with the staging area, you can decide that you didn’t mean to be working on something yet with the stash command, and you can rewrite commits that already happened so they look like they happened in a different way. This can involve changing the order of the commits, changing messages or modifying files in a commit, squashing together or splitting apart commits, or removing commits entirely — all before you share your work with others.
In this section, you’ll cover how to accomplish these very useful tasks so that you can make your commit history look the way you want before you share it with others.
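As a concrete sketch of the simplest form of history rewriting, here is amending the most recent commit. The directory, user identity, and commit messages are made up for the demo; as the passage says, this is only safe on commits you have not yet shared with others.

```shell
# Sketch: rewriting an unshared commit in a throwaway repository.
set -e
mkdir rewrite-demo && cd rewrite-demo
git init -q

# An initial commit with a message we regret (identity set inline
# so the demo works without global git config):
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "first draft message"

# --amend replaces the most recent commit in place:
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --amend --allow-empty -m "polished message"

git log --format=%s   # history now contains only: polished message
```

For reordering, squashing, splitting, or dropping several commits at once, `git rebase -i HEAD~3` opens the same kind of last-minute decision making over, in this case, the last three commits.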
Monday, October 18, 2010
After visiting Okinawa, Japan, and meeting with global experts on innovation, I’ve come to the conclusion that Silicon Valley’s greatest advantage isn’t its diversity; it is the fact that it accepts and glorifies failure. Like many other countries, Japan has tried replicating Silicon Valley. It built fancy tech parks, provided subsidies for R&D, and even created a magnificent new research university. Yet there are few tech startups, and there is little innovation; Japan’s economy is stagnant.
There is a reason for this stagnation.
Sunday, October 17, 2010
Cocoa Development in Emacs
Where the future was made yesterday.
By Mark Dalrymple on March 24, 2002.
Presented here for your amusement are some directions on how I use emacs (rather than Project Builder) as the primary focus for my Cocoa development time. These may be of interest to folks who like to use emacs for everything, or to those coming from other unix platforms who don't like using Project Builder to edit source code.
This is my typical workspace
I've moved the motivation for all of this to the rants section at the end. I figured most folks aren't interested in my ego, er, my history, and just want to get to the good stuff.
The instructions presented here match my particular development style. I don't mind my build environment having some rough edges. I don't mind having to expend a little extra mental energy when doing my work. I also want to minimize keystrokes when it makes sense. I do want my compile and running and debugging turnarounds to be as fast as possible. (Make Mistakes Faster was the advertising tag line for a classic Mac C compiler) Once my emacs buffers get warmed up, I've gotten my compiling and running down to two keystrokes each (well, three if you count "return". From here on I'm not counting the "return" key. nyah). With everything running in one emacs process, I never have to touch the mouse (or the annoying TiBook trackpad) while I'm reading or editing source code or browsing the system header files. I do use Mail.app for mail reading and iCab to read the Cocoa documentation, so I'm not a complete GUI luddite.
With that said, I presume you're somewhat familiar with emacs, how to open files, how to move around, and how keystrokes are described. For example, M-x goto-line [ret] 321 [ret] is: press meta/escape, then x, type goto-line (with tab completion if you so desire), return, 321, return. I also presume you know what the .emacs startup initialization file is and how to add stuff to it. If you don't know emacs, you might want to run the built-in tutorial. (Start emacs, press escape, then 'x', then type 'help', press return, press 't', and follow along.) O'Reilly (of course) has a book on emacs.
He (Steve) did not respect large organizations. He felt that they were bureaucratic and ineffective. He would call them "bozos."
I remember going into Steve’s house and he had almost no furniture in it. He just had a picture of Einstein, whom he admired greatly, and he had a Tiffany lamp and a chair and a bed. He just didn’t believe in having lots of things around but he was incredibly careful in what he selected. The same thing was true with Apple. Here’s someone who starts with the user experience, who believes that industrial design shouldn’t be compared to what other people were doing with technology products but should be compared to what people were doing with jewelry… Go back to my lock example, and hinges and a door with beautiful brass, finely machined, mechanical devices. And I think that reflects everything that I have ever seen that Steve has touched.
When I first saw the Macintosh — it was in the process of being created — it was basically just a series of components on what is called a breadboard. It wasn’t anything, but Steve had this ability to reach out and find the absolute best, smartest people he felt were out there. He was extremely charismatic and extremely compelling in getting people to join up with him, and he got people to believe in his visions even before the products existed. When I met the Mac team, which eventually got to 100 people (though by the time I met him it was much smaller), the average age was 22.
These were people who had clearly never built a commercial product before but they believed in Steve and they believed in his vision. He was able to work in multiple levels in parallel.
On one level he is working at the “change the world,” the big concept. At the other level he is working down at the details of what it takes to actually build a product and design the software, the hardware, the systems design and eventually the applications, the peripheral products that connect to it.
In each case, he always reached out for the very best people he could find in the field. And he personally did all the recruiting for his team. He never delegated that to anybody else.
The other thing about Steve was that he did not respect large organizations. He felt that they were bureaucratic and ineffective. He would basically call them “bozos.” That was his term for organizations that he didn’t respect.
In this episode, your host Miles Forrest interviews Robert Martin, known by many as "Uncle Bob." Bob has been slinging code for 40 years, and still loves coding. As Bob puts it, "I want to code till I die and I don't want to die soon." Bob reveals his thoughts on the craft of programming and hopes for the next computer language, including the solution to the Moore's Law dilemma that dates back to 1957. He'll describe the right way to write a framework (hint: don't _write_ it) and discuss current problems and opportunities with agile development methods. Discover the *Most Horrible Invention* in the last twenty years (and possibly the most popular!) and what Bob thinks about experience, mentorship and science fiction. Created: Oct 13, 2010 Duration: 49:11 Size: 27.9 MB
Saturday, October 16, 2010
Naming a startup is hard. Very hard. On the one hand, the pragmatic entrepreneur thinks: “I shouldn’t be wasting time on this — for every successful company with a great name, there’s one with a crappy name that did just fine. It doesn’t seem like a name has much influence on the outcome at all. I’m going to get back to writing code.” I sort of agree with this. You shouldn’t obsess about your name. But, you also shouldn’t dismiss it as unimportant. Part of the startup game is to try to remove unnecessary friction to your growth. Sure, you could build a spectacularly successful company despite having a lousy name — but why not stack the odds in your favor?
One more reason why spending calories on picking a great name is important: It’s a one-time cost to get a great name — but the benefit is forever. Conversely, if you short-change this and dismiss it completely, you’re going to incur what I’d call “branding debt”. Not bad at first, and maybe not a big deal for you ever, but every year, as you grow, you’ll have this small voice nagging inside your head “should I change the name of the company…”. It’s going to be annoying. And the longer you wait, the more expensive the decision is, and the less likely you are to do it. Save yourself some of that future pain, and invest early in picking a decent name. You may still get it wrong, but at least you’ll know you tried.
Can You Focus Long Enough to Deliver?
We all know the guy who moves from one idea to the next and never finishes anything. He’s freakishly smart, but leaves a trail of half-finished carnage in his wake. Staying focused is a huge part of being successful.
To evaluate your level of focus, look at your history. How many half-finished apps are sitting on your hard drive? How many blogs have you started and abandoned within three weeks? How many have you worked on until they were done?
People who have trouble staying focused on an idea often feel intense passion for it early on, obsess about it for a week or two, and burn themselves out on it by the one-month mark.
If you have trouble with this, you may need to give yourself a cooling-off period. What often happens is that you become so engrossed in the idea that you never stop to look at it rationally and realize it has a glaring flaw: a flaw you find 2-3 weeks later, when you realize you probably won’t be able to pull it off after all. That would have been nice to know 2-3 weeks earlier.
I’ve found that the first several hours (or even days) after coming up with a new idea are filled with irrational, euphoric thoughts of how easily it can be executed and how well the market will receive it. You’ll often hear yourself saying “Why hasn’t anyone thought of this?”
I have a rule that I never spend money on an idea in the first 48 hours. During this time my judgment is clouded by the euphoria of having this amazing new idea. Given that I’ve had several hundred ideas over the past few years, at a minimum I’ve saved myself a few thousand bucks in domain registration fees.
Inspired by a talk I gave yesterday at the BOS conference. This is long, feel free to skip!
My first real job was leading a team that created five massive computer games for the Commodore 64. The games were so big they needed four floppy disks each, and the project was so complex (and the hardware systems so sketchy) that on more than one occasion, smoke started coming out of the drives.
Success was a product that didn't crash, start a fire or lead to a nervous breakdown.
Writing software used to be hard, sort of like erecting a building used to be hundreds of years ago. When you set out to build an audacious building, there were real doubts about whether you might succeed. It was considered a marvel if your building was a little taller and didn't fall down. Now, of course, the hard part of real estate development has nothing to do with whether or not your building is going to collapse.
Thursday, October 14, 2010
With the Ruby on Rails 3 Tutorial screencast series, you'll learn to make real, industrial-strength web applications with Ruby on Rails, the open-source web framework that powers many of the web’s top sites, including Twitter, Hulu, and the Yellow Pages. The screencast series includes 12 individual lessons totaling more than 15 hours, with one lesson for each chapter of the Ruby on Rails 3 Tutorial book. The Rails 3 Tutorial screencasts also contain dozens of tips and tricks to help you go beyond the Rails Tutorial book, including debugging hints, test-driven development techniques, and solutions to many of the book’s exercises. And though the screencasts are carefully edited, I’ve left in some of the problems I encountered along the way, so that you can learn how to recover from the inevitable application errors—and see that even experts sometimes make mistakes.
The Ruby on Rails 3 Tutorial screencasts bring Rails development to life in a way that is difficult to express in print: if the Rails Tutorial book is the musical score, the screencast series is the symphony. To see some examples, you can view an excerpt or download a complete sample lesson. Like the Rails Tutorial PDF, the screencast series is available for purchase as a standalone product, but the best deal is the PDF/screencast bundle; click here to buy it now!
At RailsConf 2009, BJ Clark and I gave a talk about working with legacy Rails apps. In that talk, we spent some time talking about technical debt. Ward Cunningham originally coined the term 18 years ago, and it has enjoyed a resurgence in blog posts and conference talks throughout the past year.
I’ve noticed that most discussions about technical debt, including BJ’s and mine, miss the mark when it comes to Ward’s original point.
Ward invented the Debt Metaphor to explain how a software project benefits from delivering software early on in the project’s lifecycle. He was working on WyCash, an investment portfolio system that modeled a complex set of financial instruments. Because Ward and the rest of the programming team were programmers and not financial experts, a good deal of their effort went into learning the complexities of the domain they were working in. This sort of learning is best done iteratively, by creating software and observing how it works, and then taking what you learn and investing it back into the software. Steve Freeman, listening to Ward speak about technical debt, captured this process:
- Use what you know
- Feel it work
- Share the experience
- Wait for insight
- Refactor to include it
The most interesting bit to me is “refactor to include insight.” We tend to interpret refactoring as improving the design of existing code, but if you listen to Ward talk it’s clear that he literally means changing the factorization of the code over time. As his team learned about the problem, they modified the program “to look as if we had known what we were doing all along, and to look as if it had been easy to do.” It was crucial to him that the software reflect the team’s current understanding of the problem, and that it be continually updated to reflect any new insights they learned.
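As a sketch of what "changing the factorization" might look like in practice, here is a hypothetical example in the spirit of WyCash's financial domain (this is illustrative code I made up, not actual WyCash code): the behavior stays identical, but the code is reworked to name the insight the team gained.

```python
# Hypothetical "refactor to include insight" (not actual WyCash code).
# Before the insight: correct, but the 360 is anonymous arithmetic.
def accrued_before(principal, rate, days):
    return principal * rate * days / 360

# After the insight: the team learns this day-count convention is called
# ACT/360, so the factorization now says so -- the code looks "as if we
# had known what we were doing all along".
ACT_360_DAY_COUNT = 360

def accrued_interest(principal, rate, days):
    """Simple interest accrued under the ACT/360 day-count convention."""
    return principal * rate * days / ACT_360_DAY_COUNT

# Same observable behavior, different factorization:
assert accrued_before(1000.0, 0.05, 90) == accrued_interest(1000.0, 0.05, 90)
```

Nothing about what the program does changed; what changed is that the program now expresses the team's current understanding of the domain, which is exactly the "repayment" Ward's metaphor asks for.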
Wednesday, October 13, 2010
Getting smart about the hierarchy of smart: Treat an expert like an expert, and a novice like a novice.
Don't talk to all your employees, all your users or all your prospects the same way, because they're not the same.
The Dreyfus model of skill acquisition posits that there are five stages people go through:
1. Novice
--wants to be given a manual, told what to do, with no decisions possible
2. Advanced beginner
--needs a bit of freedom, but is unable to quickly describe a hierarchy of which parts are more important than others
3. Competent
--wants the ability to make plans, create routines and choose among activities
4. Proficient
--the more freedom you offer, the more you expect, the more you'll get
5. Expert
--writes the manual, doesn't follow it.
If you treat an expert like a novice, you'll fail.
I work at Meetup, but the opinions expressed in here are my own, and do not necessarily represent the opinions of Meetup.
When the Freehackers Union post came out and it started getting some attention, I was completely stoked. I couldn't wait to move to NYC (I was moving anyway), show off my hack, and gain acceptance into the group. Of course, by the time I actually got here, the F.U. had dissipated.
I have to be honest: I thought Zed was exaggerating quite a bit when he described his frustrations attending tech events. But as I got involved in the NYC tech scene, I came to feel the same way.
But Zed was right: NYC tech groups and events suck. OK, maybe that's a bit harsh, because there are great ones out there, but even the ones I like suffer from business people and, even worse, horrible tech recruiters.
Tuesday, October 12, 2010
Are you working in one of those big companies with slow processes and endless meetings? Are you told to build an API without really knowing how it is going to be used? Are you just following the specification to make it technically correct?
The knowledge of how to do things is the most valuable asset developers bring to teams. Sadly, some people think it is the only thing a developer needs in order to be picked for a project.
We don’t think so.
Monday, October 11, 2010
And I worked my way through the remaining jigsaw pieces on the table. One of the aspects is letting the patient join your team, and not only the patient him- or herself, but also the family and informal caregivers, for different kinds of reasons. If you're serious about this, you also have to cope with different kinds of communication models, invest in two-way communication (including via the internet), and seriously commit to giving the patient access to his or her OWN data.
Saturday, October 9, 2010
The term 'Continuous Integration' originated with the Extreme Programming development process, as one of its original twelve practices. When I started at ThoughtWorks, as a consultant, I encouraged the project I was working with to use the technique. Matthew Foemmel turned my vague exhortations into solid action and we saw the project go from rare and complex integrations to the non-event I described. Matthew and I wrote up our experience in the original version of this paper, which has been one of the most popular papers on my site.
* Start by picking a name, an identity. It can be your online username/nickname, or your full name if you are working alone, or the name of a team if you are working with one. You can also pick a name for a team or a startup even if you are working by yourself, if you eventually plan to recruit teammates. Whichever you choose, you must pick a name. This is important. There are many ways to pick a name; pick the one that best fits whatever you are trying to sell to your clients. You can pick 'The 3 little pigs' if you feel like it; it actually sounds unique for a startup company and is a good conversation starter. People will want to know why you named it after pigs.
So make sure to get a name. At a minimum, I suggest you make a name of yourself. Focus on promoting yourself, and slowly build that startup identity. Note that you are the sole owner of your identity: even if you join an existing startup or build one from scratch, you want people to identify you as you, as someone who has been involved in this or that startup. You want to keep that identity around you, not have it replace you. Startups and businesses come and go. You, there's only one of you. You have to maintain your identity like it's a startup built to last 100 years (unless you get to 100, or less, depending on how much liempo you eat a day). If you don't like this kind of attention, I suggest you get a job. I don't mean to make 'get a job' sound so negative, but it is something a lot of freelancers and startup folks like to avoid. I've found that people who want to build startups don't like a job. Don't want to work. They just want to play computer games (now, that is a topic worthy of another post).
Friday, October 8, 2010
Yes, it's supposed to be Ruby, but this code doesn't look like Ruby to me. It's still Objective-C, minus the curly braces.
app = NSApplication.sharedApplication
app.delegate = AppDelegate.new
window = NSWindow.alloc.initWithContentRect([200, 300, 300, 100],
  styleMask: NSTitledWindowMask|NSClosableWindowMask|NSMiniaturizableWindowMask,
  backing: NSBackingStoreBuffered,
  defer: false)
window.title = 'MacRuby: The Definitive Guide'
window.level = 3
window.delegate = app.delegate
button = NSButton.alloc.initWithFrame([80, 10, 120, 80])
button.bezelStyle = 4
button.title = 'Hello World!'
button.target = app.delegate
button.action = 'say_hello:'
What's with the .alloc thing? Hehe! :D And the bitwise-OR style of passing options. Hashes with symbols are the 'in thing' in Ruby.
I think the Ruby/Mac folks should join in the development; otherwise, it's still going to be Objective-C/Cocoa on this MacRuby thing.
Mac OS X 10.5 (Leopard) provides a build of MRI (Matz’s Ruby Interpreter), version 1.8.6. This is the current de facto standard for Ruby interpreters; it is stable, well documented, tested, and understood, etc. If you need to run a legacy Ruby script, with a minimum of hassle, the default ruby(1) command is probably the right choice. Similarly, if you have a legacy RubyCocoa application which you simply wish to run, RubyCocoa is certainly the right choice.
However, if you have needs that aren’t well met by these offerings, MacRuby is certainly worthy of your consideration. MacRuby began as an attempt to work around many problems inherent in RubyCocoa. In the course of solving these problems, MacRuby has also solved numerous problems in Ruby 1.8. Consequently, there are a number of reasons (e.g., convenience, efficiency, flexibility, performance) why one might wish to use MacRuby for new (and ongoing) Ruby applications:
MacRuby is based on Ruby 1.9, so it is powered by the YARV bytecode interpreter. This greatly reduces the execution time of Ruby programs.
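You can poke at the YARV pipeline from stock Ruby 1.9+ itself; a small sketch (this uses plain MRI's RubyVM::InstructionSequence, not anything MacRuby-specific):

```ruby
# RubyVM::InstructionSequence exposes the YARV compiler (Ruby 1.9+ on MRI).
iseq = RubyVM::InstructionSequence.compile('1 + 2')
puts iseq.disasm   # dump the compiled bytecode (putobject, opt_plus, ...)
puts iseq.eval     # run the compiled snippet: prints 3
```

Ruby source is compiled to these bytecode instructions once and then executed by the VM, rather than being re-walked as an abstract syntax tree on every pass, which is where the speedup over Ruby 1.8's interpreter comes from.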
Monday, October 4, 2010
Elon Musk looks like a kid who just walked into a toy factory. The 39-year-old CEO of upstart car company Tesla Motors stands on the main floor of the New United Motor Manufacturing plant and looks with awe from one giant piece of machinery to the next. The car factory, known as Nummi, is located in Fremont, California, but it’s an industrial city unto itself. It encompasses 5.5 million square feet and contains a plastics molding factory, two paint facilities, 1.5 miles of assembly lines, and a 50-megawatt power plant. Since 1984, Toyota and General Motors had run Nummi together, producing as many as 450,000 cars a year here until it was shuttered in April. Now, in a remarkable turn of events, Musk owns the place.
He seems as surprised as anyone at this development. For years, the exuberantly ambitious entrepreneur wasn’t even allowed to visit. Plant managers apparently frowned on the idea of a potential competitor touring the facility. Not that they had much to fear: In 2009, Tesla managed to produce only about 800 high-performance electric sports cars—a niche manufacturer in an industry that churns out millions of vehicles.
The most exciting thing I can learn about anyone boils down to this:
They really, truly give a damn about something.
It’s important to calibrate what I mean by this. Being a stickler about Star Trek trivia, parts of speech or state capitals doesn’t count. Affinity for knee-jerk politics doesn’t qualify, either.
Giving a damn is about sacrifice and investment. It’s paying with something precious, in the service of something you really, truly value.
Sunday, October 3, 2010
What is the Bash Shell?
The GNU Bourne-Again SHell (bash) incorporates features from the C Shell (csh) and the Korn Shell (ksh) and conforms to the POSIX.2 shell specification. It provides a Command Line Interface (CLI) for working on *nix systems and is the most common shell used on Linux systems. Useful bash features will be the subject of the rest of this document.