The commitment point is very true, and it extends to employees and clients if you're in certain spaces and want to maintain a clean record, whether in civil court or the court of public opinion.

The rest of these arguments seem like bad framing. From a certain school of thought, the entire point of a startup is to leverage your risk-taking ability relative to incumbents in the market and your economic peers. Working at a startup forces you to learn more, faster, and you're challenged by real market factors, not artificial incentive structures created by a large org or academic institution. You also wear a ton of hats that you have no access to early in your career at larger orgs or in academia: hiring, management, sales, accounting, taxes, finance, product ideation and refinement, etc. Early years at FAANG or a medium-sized org will not expose you to all of these. Even an experimental team that lets you play with cool technological toys won't.

"Not being at school" is an silly way to describe, "4 years of school is sometimes a poor choice of use of the best risk-taking opportunities of a person's life".

"Your VC is not the one at risk here" is a reason to avoid VC, subvert some of the incentives that VCs give you, or ignore a subset of the advice they they give you, not a reason to avoid risky ventures.

It's important to note that the author here is an 18-year-old with no significant real-life perspective on either running a company or being part of a larger org. Nothing about that affects their ability to accurately pontificate on the pros and cons of running a startup, but there also isn't a lot of skin in the game or experience to back up that perspective. The Bayesian prior here is unfavorable.

If you're partway through a uni degree or similar learning program, or an incumbent engineer at a small or large org, then you should look at starting or joining an early-stage startup as a great way to take risks that you will be less able to take with every passing year (because of increasing costs of living, commitments to a new generation of your family if you marry and have children, incentives to purchase real estate, etc.). The payoff will hopefully be huge and will be distributed across potential exits, experience, and personal growth.


No offense, but this doesn't read as a situation you should feel vindicated about. Engineering management is a complicated beast whose dimensions, let alone nuances, the average junior dev doesn't see. Town hall meeting or not, the "CEO of a conglomerate" probably doesn't want an admittedly inarticulate opinion on which project management style not to use, especially without a clear alternative being suggested.


I don't, and my coworkers laughed at me when I relayed that conversation to them. I feel vindicated that the issue around requirements gathering, and the effect it has on development, isn't unique to my company.


The points the author makes about gradient descent are accurate, in a sense. However, they oversimplify the technique (as it is applied today) and the context in which it is used. It seems as if the author, like many others, has a grasp of the subject's basic mechanisms, but not the context in which experts understand them.

The example the author cites regarding evolutionary algorithms learning physical laws is laughable: "It's just not in the data - it has to be invented" applies equally to backprop and to evolutionary learning algorithms.

"In this case, the representation (mathematical expressions represented as trees) is distinctly non-differentiable, so could not even in principle be learned through gradient descent."

This is incorrect, almost like saying NLP data is not differentiable. For instance, set this representation up as the output of a network (or, if you wanted to be fancier, the central component of an autoencoder), and see how well it predicts/correlates with the experimental data. That discrepancy is the error, which is back-propagated through the network's nodes.
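To make that concrete, here's a minimal sketch of the general idea (my own illustration, not the article's setup): relax the discrete expression tree into a continuous weighted sum over a fixed basis of candidate terms, so the fit is differentiable end to end and plain gradient descent can recover the law. The target law (y = 9.81 * x**2), the basis terms, and the hyperparameters are all invented for the example.

    # Minimal sketch: differentiable relaxation of symbolic regression.
    # A discrete expression tree is replaced by continuous weights over a
    # fixed basis of candidate terms, so gradient descent applies directly.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(-2.0, 2.0, size=200)
    y = 9.81 * x**2                     # synthetic "experimental" data

    # Candidate terms a symbolic expression tree might contain.
    basis = np.stack([x, x**2, np.sin(x)], axis=1)   # shape (200, 3)
    w = np.zeros(3)                     # continuous, learnable weights

    lr = 0.3
    for _ in range(5000):
        residual = basis @ w - y
        grad = basis.T @ residual / len(x)   # gradient of mean squared error
        w -= lr * grad

    print(np.round(w, 2))  # weight on the x**2 term -> ~9.81, others -> ~0

Real systems use fancier relaxations than a fixed basis, but the principle is the same: the discreteness of the final symbolic readout doesn't stop gradients from flowing through a continuous surrogate.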

FWIW, many theoreticians believe that the unreasonable effectiveness of neural networks, and especially transfer learning, is a result of how well suited they are to encoding laws of physics and Euclidean geometry. The author's final points about a nine-year-old survey may be out of date w.r.t. contemporary neural networks, which often have spookily good local minima and do not behave the way intuition about gradient descent might suggest.


This comment is both factually inaccurate (as other posters have pointed out re: the toilet sensor) and completely misguided. Nobody should need to trust an opaque algorithm running on complex hardware to ensure their own privacy. Even if your beliefs about what Google or another owner would do with this hardware are correct (which I seriously doubt; it's far more likely the data will be used as training input for machine learning), there is little to no assurance that the police or other parts of the state would not compel surveillance using these devices, nor that malicious actors would not use them for their own ends.

