DSLReports Speed Test is the best

It measures “bufferbloat”: how badly your latency degrades while the connection is under peak bandwidth usage.

My Comcast connection gets a D or F rating :)

https://www.dslreports.com/speedtest


One Sure-Fire Way to Improve Your Coding

https://changelog.com/posts/one-sure-fire-way-to-improve-your-coding

The most obvious way to improve your coding is to write more code. Everybody knows that. However, another activity which I guarantee will improve your coding is the complete opposite of writing. I will state this as plainly as I possibly can:

If you want to dramatically increase your programming skills you need to be reading other people’s code.

You Are Not Paid to Write Code

http://bravenewgeek.com/you-are-not-paid-to-write-code/

Every time you write code or introduce third-party services, you are introducing the possibility of failure into your system.
I think the same can safely be said of code—more code, more problems.

almost anything is easier to get into than out of. When we introduce new systems, new tools, new lines of code, we’re with them for the long haul. It’s like a baby that doesn’t grow up.

We’re not paid to write code, we’re paid to add value (or reduce cost) to the business. Yet I often see people measuring their worth in code, in systems, in tools—all of the output that’s easy to measure.

the siloing of responsibilities. Product, Platform, Infrastructure, Operations, DevOps, QA—whatever the silos, it’s created a sort of responsibility lethargy. “I’m paid to write software, not tests” or “I’m paid to write features, not deploy and monitor them.” Things of that nature.

I think this is only addressed by stewarding a strong engineering culture and instilling the right values and expectations.

10 Modern Software Over-Engineering Mistakes


if we plan for 100 things, Business will always come up with the 101st thing we never thought of…we have no clue what’s headed our way.

In my 15-year involvement with coding, I have never seen a single business “converge” on requirements. They only diverge. It’s simply the nature of business, and it’s not the business people’s fault.

  • Prefer Isolating Actions over Combining them
  • Duplication is better than the wrong abstraction (see the sketch after this list)
  • Wrappers are an exception, not the norm. Don’t wrap good libraries for the sake of wrapping
  • Blindly applying Quality concepts (like changing all variables to “private final”, writing an interface for all classes, etc.) is NOT going to make code magically better.
  • In-House “Inventions” – they feel cool in the beginning, but they are the most common source of legacy code a few years later… Keeping them going takes constant effort; even a tiny open source library takes a lot of time to maintain.
  • Refactoring is part of each and every story. No code is untouchable
  • Quality needs time, not just skill. Smart developers frequently overestimate their capability, then end up taking ugly hacks to finish work on a self-imposed suicide timeline. TL;DR – bad estimation destroys quality before a single line of code is written
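
To make the “wrong abstraction” point concrete, here is a minimal C++ sketch (the functions and flags are invented for illustration): one shared helper that has accumulated boolean flags for every caller, next to the duplicated-but-obvious alternative.

    #include <iostream>
    #include <string>

    // The "wrong abstraction": one shared helper accumulates flags as each
    // new caller bends it to a slightly different purpose, until no
    // caller's intent is visible at the call site.
    std::string formatName(const std::string& first, const std::string& last,
                           bool lastNameFirst, bool initialsOnly) {
        if (initialsOnly)
            return std::string(1, first[0]) + ". " + std::string(1, last[0]) + ".";
        if (lastNameFirst)
            return last + ", " + first;
        return first + " " + last;
    }

    // The duplicated-but-simple alternative: each function says exactly
    // what it does, and each can change without breaking the other's callers.
    std::string fullName(const std::string& first, const std::string& last) {
        return first + " " + last;
    }

    std::string sortKey(const std::string& first, const std::string& last) {
        return last + ", " + first;
    }

    int main() {
        std::cout << formatName("Ada", "Lovelace", true, false) << "\n"; // what do the bools mean?
        std::cout << sortKey("Ada", "Lovelace") << "\n";                 // intent is obvious
    }

The duplicated versions are trivially small, and when requirements diverge each one can evolve on its own instead of growing another flag.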

must protect the technology team from becoming a pure execution arm

I hate environments where the technology department is simply implementing what was envisioned by others. It might as well be an outsourced development team, because it sits outside the decision-making process. As Camille Fournier puts it in ‘On the role of CTO,’ “The CTO must protect the technology team from becoming a pure execution arm for ideas without tending to its own needs and its own ideas.”

https://juokaz.com/blog/becoming-a-cto

Working Software

“there is nothing like a tested, integrated system for bringing a forceful dose of reality into any project. Documents can hide all sorts of flaws. Untested code can hide plenty of flaws. But when people actually sit in front of a system and work with it, then flaws become truly apparent: both in terms of bugs and in terms of misunderstood requirements.”

http://www.martinfowler.com/articles/newMethodology.html

John Carmack on Inlined Code

http://number-none.com/blow/blog/programming/2014/09/26/carmack-on-inlined-code.html

The way we have traditionally measured performance and optimized our games encouraged a lot of conditional operations – recognizing that a particular operation doesn’t need to be done in some subset of the operating states, and skipping it. This gives better demo timing numbers, but a huge number of bugs are generated because skipping the expensive operation usually also skips some other state update that turns out to be needed elsewhere.
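
A minimal sketch of that failure mode (the entity and its fields are hypothetical): the conditional skip saves the expensive work, but it also silently freezes a cheap piece of state that other systems read.

    // Skipping the "expensive" branch also skips a cheap state update that
    // other systems depend on, so the bug only appears in some operating states.
    struct Entity {
        bool  visible  = false;
        float animTime = 0.0f;   // also drives sound cues elsewhere
    };

    // The "optimized" version: better timing numbers, hidden bug.
    void updateEntity(Entity& e, float dt) {
        if (!e.visible)
            return;               // skip animation for off-screen entities...
        e.animTime += dt;         // ...but the timer silently stops advancing too
        // expensiveSkinning(e);  // the work we actually meant to skip
    }

    // The consistent version: the cheap state update always runs; only the
    // genuinely expensive work is conditional.
    void updateEntityConsistent(Entity& e, float dt) {
        e.animTime += dt;         // state advances every frame, visible or not
        if (e.visible) {
            // expensiveSkinning(e);
        }
    }

    int main() {
        Entity e;
        updateEntity(e, 0.016f);            // animTime stays 0: the hidden bug
        updateEntityConsistent(e, 0.016f);  // animTime advances as intended
    }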

We definitely still have tasks that are performance intensive enough to need optimization, but the style gets applied as a matter of course in many cases where the performance benefit is negligible, yet we still eat the bugs. Now that we are firmly decided on a 60 Hz game, worst-case performance is more important than average-case performance, so highly variable performance should be looked down on even more.

It is very easy for frames of operational latency to creep in when operations are done deeply nested in various subsystems, and things evolve over time. This can lie hidden as a barely perceptible drop in input quality, or it can be blatantly obvious as a model trailing an attachment point during movement. If everything is just run out in a 2000 line function, it is obvious which part happens first, and you can be quite sure that the later section will get executed before the frame is rendered.
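
As a toy illustration of that flat style (the subsystem functions here are made-up stubs), the ordering guarantee is visible at a glance:

    #include <cstdio>

    // Hypothetical stubs standing in for real subsystems.
    void readInput()         { std::puts("input"); }
    void movePlayer(float)   { std::puts("move"); }
    void updateAttachments() { std::puts("attach"); }
    void renderScene()       { std::puts("render"); }

    // With the frame laid out flat, it is obvious that attachments are
    // resolved after the model moves and that rendering runs last, every frame.
    void runFrame(float dt) {
        readInput();
        movePlayer(dt);
        updateAttachments();   // sees the *moved* model: no one-frame trailing
        renderScene();
    }

    int main() { runFrame(1.0f / 60.0f); }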

Besides awareness of the actual code being executed, inlining functions also has the benefit of not making it possible to call the function from other places. That sounds ridiculous, but there is a point to it. As a codebase grows over years of use, there will be lots of opportunities to take a shortcut and just call a function that does only the work you think needs to be done. There might be a FullUpdate() function that calls PartialUpdateA(), and PartialUpdateB(), but in some particular case you may realize (or think) that you only need to do PartialUpdateB(), and you are being efficient by avoiding the other work. Lots and lots of bugs stem from this. Most bugs are a result of the execution state not being exactly what you think it is.
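
A small sketch using the function names from the passage (the state variables are invented): once the partial updates are inlined, the tempting “efficient” call site simply cannot exist.

    #include <cassert>

    // FullUpdate/PartialUpdate names come from the passage above; the state
    // variables are hypothetical. After inlining, the partial steps have no
    // names, so no future call site can run step B against a stale step-A state.
    static int stateA = 0;   // maintained by the former PartialUpdateA()
    static int stateB = 0;   // derived from stateA by the former PartialUpdateB()

    void FullUpdate() {
        // Former PartialUpdateA(), inlined: advance the primary state.
        stateA++;

        // Former PartialUpdateB(), inlined: derive stateB. It can no longer
        // be called on its own "for efficiency", so it always sees a fresh stateA.
        stateB = stateA * 2;
    }

    int main() {
        FullUpdate();
        assert(stateA == 1 && stateB == 2);
    }

The point is not that the code got faster; it is that an entire class of “execution state wasn’t what I thought” bugs became unrepresentable.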

In almost all cases, code duplication is a greater evil than whatever second order problems arise from functions being called in different circumstances, so I would rarely advocate duplicating code to avoid a function, but in a lot of cases you can still avoid the function by flagging an operation to be performed at the properly controlled time.
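
Here is one way that flagging pattern might look (all names are hypothetical): call sites merely request the work, and the frame loop performs it once, at one controlled point.

    // Call sites only raise a flag; the expensive operation runs once, at a
    // known point in the frame, after all state changes are in.
    struct Model {
        bool boundsDirty = false;   // set anywhere, serviced in one place
        int  version     = 0;       // stand-in for real cached data
    };

    void markBoundsDirty(Model& m) { m.boundsDirty = true; }  // cheap, safe anywhere

    void updateFrame(Model& m) {
        // ...gameplay code runs first and may mark the flag any number of times...
        if (m.boundsDirty) {
            m.version++;            // stand-in for the expensive recomputation
            m.boundsDirty = false;  // done exactly once per frame
        }
        // ...rendering follows and always sees up-to-date bounds...
    }

    int main() {
        Model m;
        markBoundsDirty(m);
        markBoundsDirty(m);   // duplicate requests coalesce
        updateFrame(m);       // the work happens here, once
    }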

To sum up:

  • If a function is only called from a single place, consider inlining it.
  • If a function is called from multiple places, see if it is possible to arrange for the work to be done in a single place, perhaps with flags, and inline that.
  • If there are multiple versions of a function, consider making a single function with more, possibly defaulted, parameters.
  • If the work is close to purely functional, with few references to global state, try to make it completely functional.
  • Try to use const on both parameters and functions when the function really must be used in multiple places.
  • Minimize control flow complexity and “area under ifs”, favoring consistent execution paths and times over “optimally” avoiding unnecessary work.
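
A short sketch tying a few of these points together (the names and formula are hypothetical): two former variants, Damage() and DamageWithFalloff(), merged into one function with a defaulted parameter, written as a pure function of const inputs with no global state.

    #include <cassert>

    // One function with a defaulted parameter replaces two near-duplicate
    // versions; it is purely functional, so it is trivial to test and safe
    // to call from anywhere.
    float damage(const float base, const float distance, const float radius = 0.0f) {
        if (radius <= 0.0f)                   // default: no falloff applied
            return base;
        const float f = 1.0f - distance / radius;
        return base * (f > 0.0f ? f : 0.0f);  // clamp: no damage beyond the radius
    }

    int main() {
        assert(damage(100.0f, 5.0f) == 100.0f);        // old no-falloff variant
        assert(damage(100.0f, 5.0f, 10.0f) == 50.0f);  // old falloff variant
    }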