Show HN: How to ensure JavaScript code quality (deepscan.io)
83 points by wschoi on Nov 7, 2017 | 51 comments


Our JavaScript at this point is basically an add-only nightmare that I have no idea how to dig ourselves out of.

It is a bigger problem than I know how to fix. We even have a similar static analysis tool on our code, but when there are thousands and thousands of existing issues pointed out by it no one really cares about adding a few more to the pile.

I tried to promote TypeScript and rewriting over time. I can’t seem to get our front end developers to understand the value, or care. It’s totally my failure; it hurts my soul.

It’s honest to god something of an existential crisis for me.

I hunkered down primarily in backend work over the nightmare that is our JS, whereas I spent nearly ten years full stack. Sigh.


Here is the method I used in a similar situation. Start from an extremely lenient linter config where you get 0 errors and 0 warnings with your existing code (that might mean an empty config, but that's OK). Then add rules one by one, fixing the codebase completely for each rule that you add, so that you always have 0 warnings and 0 errors with each commit. Yes, this is a significant amount of work, but at least the work is broken down into chunks, and each chunk has a clear definition of done (it can go straight into an agile backlog, etc.). By the time you have a sensible linter config, you have a much better codebase as well.
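
With ESLint, that incremental process maps naturally onto the config file itself. A minimal sketch, with illustrative rule choices (the point is the pattern, not these particular rules):

```javascript
// .eslintrc.js sketch: start from a config that accepts the codebase
// as-is, then enable one rule per cleanup pass.
module.exports = {
  rules: {
    // pass 1: done -- the codebase is fully clean for this rule
    'no-unused-vars': 'error',
    // pass 2: currently being fixed -- warn only, so the build stays green
    'eqeqeq': 'warn',
    // everything else stays off until its turn comes
  },
};
```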


Ditto, this is the same thing I did for a 16,000+ line app.

I used eslint ( https://eslint.org ) and prettier ( https://prettier.io ), and using airbnb's JavaScript Style Guide as a reference point ( https://github.com/airbnb/javascript ), I would enable rules one by one. Then slowly rule by rule (commit by commit) I would clean up the code base.


Our modules are currently at 96398 lines of code (I am not counting the apps). It is still maintainable and perfectly understandable. Everything is split into modules, though, and every module has its own tests and its own examples. We lack proper documentation. It is not a problem yet, but it could turn out to be in the future.


This is exactly where my code base was about 11 months ago. We just got a new hire and he commented on how nice the code was (should have seen it 11 months ago, haha). It all started with the linter config; piece by piece it started looking better and more maintainable (refactoring to ES7 features helped, too). You just have to do it, one small part/ticket at a time.


I have taken a similar approach where I start with having 0 lint issues, and add a git hook or CI step to only allow merges with 0 lint issues.

Except instead of modifying my config over time, I add a /* eslint-disable */ comment to the top of every existing file.

Every new file will be coded to the new standard, and as I open old files to tweak them I take the opportunity to clean them up.


I feel the same; it can be overwhelming to apply linters to a legacy code base. While developing DeepScan, we tried to aggressively filter out low-impact issues and common coding patterns that are not actual problems.


Segmentation is the way to go; your approach is risky since you need to change a lot of different parts of the project.

We segmented by creating a legacy folder with the old standard and a folder for the new code that has strict linting rules.

If you created something new or made substantial changes to something existing, it went into the new folder. That way the code quality improvement became a natural part of our work process, and we did not have to touch parts of the code just to improve quality, which serves quite little value by itself.
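
With ESLint, this split can be encoded in the config itself rather than maintained by convention. A sketch, assuming a legacy/ folder (folder name and rule choices are illustrative):

```javascript
// .eslintrc.js sketch: strict rules project-wide, with the rules the
// old code cannot satisfy yet relaxed under legacy/ only.
module.exports = {
  extends: 'eslint:recommended',
  overrides: [
    {
      files: ['legacy/**/*.js'],
      rules: {
        'no-undef': 'off',
        'no-unused-vars': 'off',
      },
    },
  ],
};
```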


What do you do when you have an old school application where JavaScript is written in non-modern style? For example, you have a.js and b.js. You define a function "hello" in a.js and use it in b.js. In the project, the developer assumes that the two files will be included in correct order within HTML. Linters fail here because they think b.js uses an undefined function, when it's actually defined. Unfortunately, since most JS linters only validate one file at a time, they are pretty much useless here.


You add 'hello' in the linter's config as a pre-defined global, until you get around to fixing that.
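
With ESLint, that is a globals entry in the config. A sketch, assuming 'hello' is the only cross-file global:

```javascript
// .eslintrc.js sketch: declare script-order globals explicitly.
// 'hello' comes from a.js, which the HTML page loads before b.js,
// so no-undef stops flagging it in b.js.
module.exports = {
  globals: {
    hello: 'readonly',
  },
};
```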

I would not want my linter to load multiple files, since that pattern of polluting the global namespace is really not maintainable.

The fix can be fairly simple if you wrap your entire b.js file in a function that takes an argument of 'hello' which you invoke from a.js.
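
A sketch of that wrapping fix (the file layout and names here are illustrative):

```javascript
// a.js -- still defines 'hello':
function hello(name) {
  return 'Hello, ' + name;
}

// b.js -- the whole original file body goes inside this function,
// so 'hello' becomes an explicit parameter instead of an implicit global:
function moduleB(hello) {
  return hello('world');
}

// a.js (or the page) then wires them together explicitly:
var greeting = moduleB(hello); // 'Hello, world'
```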

If that doesn't work because you have too many global declarations you should probably just dump all the code into one giant file.

That might seem terrible and unmaintainable, but it is exactly what you already have.

Once the mess is all in one spot you can use a linter to start improving the quality, and begin breaking the code up into actual modules.


> just dump all the code into one giant file

Sounds too scary.

I am facing the same problem of too many global declarations all thrown into a global soup during build time when all js files are concatenated.

I am looking for a safe and methodical approach. By safe I mean a method that a robot can understand. If you leave room for thinking, people start making mistakes.

It's hard enough to get the green light for starting such a project. If you start breaking stuff as well, you're going to burn the credit for refactoring real quick.


This doesn't work on large projects with multiple people involved. Can't go through hundreds of JS files and manually add every function to the linter's config.


Sorry to hear about your frustration. A lot of people can sympathize, but I can assure you that nightmare codebases can happen in statically typed, object-oriented languages just as easily as they can in dynamically typed, functional languages. I'm not sure there's one best way to prevent it. Applying the 'unix philosophy' for better modularity can help; structuring a JS codebase from composable modules is an example. Nowadays I try to approach code with the mindset of "what if we wanted to turn this into a standalone, open-source npm package." Even simple things like writing consistently good READMEs in every repository can help. If you haven't yet, unit tests and linting should be part of your build pipeline, too.


TypeScript isn't going to magically fix any issues; it'll just make your types a bit more obvious. Likewise, CoffeeScript, React, or any other tech isn't going to fix your issue.

Javascript is powerful and flexible, and sometimes that's a flaw.

It sounds like you need an architect who can look into modularization and set some standards to which teams are held accountable. This isn't a perfect process, it usually takes a few iterations to see progress, and it requires a pitch to your business side of the house (usually along the lines of "Hey, we need more time to do maintenance work, but it'll drastically lower bugs seen by end users").


We have other codebases I developed in TypeScript that continue to be far, far better places to be. The type safety really adds a lot to prevent you from doing dumb, hard-to-understand things that just “work”. I know it’s not a magic bullet, but it’s at least a shinier one.


Right, and if you disabled 50% of the language, you would also have a better codebase, but the trade off is going to be that any development is probably slower.

Programming is about trade-offs, and TypeScript offers you a fixed set of trade-offs: much more project complexity in exchange for ahead-of-time type checking. That can be valuable in some instances, such as when one codebase has to be shared across a lot of team members who are unable to communicate well. But typically the better fix is to break down and modularize code so that we have smaller, more easily managed pieces that can be handled by just a few hands.


> I can’t seem to get our front end developers to understand the value, or care. It’s totally my failure; it hurts my soul.

Well, if your programmers don't care, there's nothing any tool can do about it.

Time for some people leadership. Get them motivated. Make them care.


If your programmers don't care, is there anything leadership can do about it? If you have seen some management tactic turn people around, please let me know --- especially if its effect is lasting. I don't care about tricks that just electrocute a dead body.

If I were in the same situation, I would start looking elsewhere.


> It is a bigger problem than I know how to fix. We even have a similar static analysis tool on our code, but when there are thousands and thousands of existing issues pointed out by it no one really cares about adding a few more to the pile.

What about just making sure the pile doesn't grow bigger? Can you add something that warns you when your total percentage of erroneous/bad code goes up? This should help to prevent adding new bad code, and rewards refactoring old bad code.
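
That "don't let the pile grow" idea is sometimes called a lint ratchet. A sketch of the core logic for a CI step (function names are illustrative; countErrors consumes the shape of ESLint's JSON formatter output):

```javascript
// The build fails only when the error count grows past a recorded
// baseline; otherwise the baseline ratchets down to the new, lower count.
function countErrors(report) {
  return report.reduce(function (sum, file) { return sum + file.errorCount; }, 0);
}

function ratchet(current, baseline) {
  if (current > baseline) {
    return { ok: false, baseline: baseline }; // fail the build
  }
  return { ok: true, baseline: current };     // record the lower count
}

// Example with a fake two-file report and a baseline of 10:
var result = ratchet(countErrors([{ errorCount: 3 }, { errorCount: 4 }]), 10);
// result.ok === true, result.baseline === 7
```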


The standard technologies are larger than they used to be, but this isn't the problem that frustrates people.

The JavaScript ecosystem is big because crappy developers are addicted to stupid. Impose hard lines in your office: ban unnecessary abstractions and large frameworks, offload extraneous build steps, and trim off as much as you can. Do this and don't allow stupid back in.

Is that going to scare the crap out of insecure developers? Absolutely. They can get over it (and you can train them) or they can go somewhere else. If you want small, manageable code, this is the hard choice you have to make.

If you are not willing to make hard decisions (and tell the cry babies to STFU) then don't complain about big code. This trauma is completely self-induced.


"I have found 10,000 errors in your code, you use tabs instead of spaces" - Linter

This doesn't seem to be a linter though, the copy says it will find actual bugs.


Thanks for clearly answering the privacy + security related questions (I specifically went to the site to search for this - way too many software dealing with your code forget to specify this):

"For demo and editor plugins, we store the source content transmitted to the server as a temporary file. Right after the inspection, the file is completely deleted."


That is laudable and refreshing. Since they are being so upfront, though, it would be nice to spell out that no copies or derivative metadata are kept either.


Thanks for the comment. We will add those facts to the documents in the next release, which is planned for the end of this month.


Just tried it out. Smooth sailing! A couple of remarks: 1) It would be nice to explain some of the warnings. I had

    myFunction() {
        for (var x in xs) { .. }
        for (var x in xs2) { .. }
    }
and it would complain about "Duplicate declaration of variable 'x'". Understandable, but then I'd like to know the JavaScript context information (apparently 'x' is not in a sub-scope as in most languages, but in the main function scope). 2) GitHub integration would be nice. A button when looking at source on GitHub that annotates the code with warnings would be quite an improvement over the current view.
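
For context, a sketch of the scoping behavior behind that warning (names here are illustrative):

```javascript
// With 'var', both loop variables hoist to the same function scope,
// so the second loop re-declares the same 'x'.
function collectVar(xs, xs2) {
  var keys = [];
  for (var x in xs) { keys.push(x); }
  for (var x in xs2) { keys.push(x); } // duplicate declaration of 'x'
  return keys;
}

// With 'let', each loop gets its own block-scoped binding,
// so there is no duplicate declaration.
function collectLet(xs, xs2) {
  var keys = [];
  for (let x in xs) { keys.push(x); }
  for (let x in xs2) { keys.push(x); }
  return keys;
}
```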


Thanks for the helpful comment!

1) We will consider adding an explanation like "Note that 'x' is declared in function scope. Consider using a 'let' declaration if you intended block scope".

2) Yes. Directly showing warnings in the GitHub site would be a nice feature. We will consider it in our roadmap.


I'm not sure what advantage this has over Flow and the usual JIRA ticket management.


DeepScan runs data-flow analysis that is somewhat orthogonal to the type checking of Flow.

For example, BAD_MIN_MAX_FUNC rule detects an error in the following code, which is beyond type checking.

  x = Math.min(0, Math.max(100, x)); // BAD_MIN_MAX_FUNC alarm. The result is always 0.
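
For reference, the clamp that code presumably intended swaps the two calls:

```javascript
// Hypothetical corrected version: clamp x into [0, 100] by taking the
// min with the upper bound first, then the max with the lower bound.
function clamp(x) {
  return Math.max(0, Math.min(100, x));
}
// clamp(150) === 100, clamp(-5) === 0, clamp(42) === 42
```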

The main difference with JIRA ticket management is that it operates using information from the code itself.

For example, it automatically tracks fixed and newly detected issues by a feature called historical defect merging.


We're running Sonar in our enterprise so this is an attractive integration. Currently it's reporting all sorts of stuff on our Java codebase that the team seems to be ignoring. Anyone have good tips on enforcing compliance?


Make the build fail if Sonar fails. Whoever breaks the build must fix it.


I've tried it with a bunch of files from an open source project of mine. It's in CoffeeScript 2, so I converted it to ES6 first. It found many minor problems and a couple of potential bugs. Definitely useful!


Where is the pricing page? Is it free?


from https://deepscan.io/getting-started/

Pricing

We are in BETA stage and do not provide paid plans yet.

But open source projects are free and one private project is for your trial experience.

If you're interested in using DeepScan for more private projects, please contact us.


Ok thanks! I was looking for a pricing page in fact :)


Does it catch security problems? Because this code returned 0 issues:

  var lave = eval(atob(document.location.hash.slice(1)))    
  console.log(lave.name)


No. DeepScan concentrates on runtime errors and does not cover security issues currently.


Is it possible to run this as a npm package against local Git repositories? Or is that in the roadmap?


We have an npm CLI package but currently provide it to limited partners only. In the future, we are considering including it under a paid plan.


Looks interesting. What are the advantages of running such analysis on GitHub rather than locally?


It mentions 'Editor Plugins' on the page so I suppose you can run this locally too.

One advantage I can think of for running such analysis on GitHub would be automatically analyzing pull requests.


So this is only available for GitHub projects? Any plans on including others?


We are planning to support Bitbucket and GitLab in the long term.

You can check out more details at https://deepscan.io/docs/get-started/basics/#non-github


Curious, how does this compare to a statically typed language compiler?


While DeepScan finds some type-related errors, it focuses on finding issues orthogonal to type checking.

For comparison with Java, DeepScan is like FindBugs.

Check the reply that I wrote for shamas.


Since the landing page mentions "semantic analysis" and based on the BAD_MIN_MAX_FUNC example in another comment, I think this is mostly focusing on logic errors that would pass a type checker/compiler without problem but show incorrect behavior at runtime. A similar product would be PVS-Studio https://www.viva64.com/en/pvs-studio/ (I haven't used it, but their articles occasionally get posted to HN.)


Dang, my projects are in GitLab. Any word on GitLab integration?


Sorry, no GitLab integration yet. It is on next year's roadmap.

In the meantime, you can try editor plugins if you are using Visual Studio Code or Atom.

Check https://deepscan.io/docs/get-started/basics/#non-github for details.


Write a guide on how to integrate it into an editor!


If you are using Atom or Visual Studio Code, you can try the following plugins:

- Visual Studio Code: https://marketplace.visualstudio.com/items?itemName=DeepScan...

- Atom: https://atom.io/packages/atom-deepscan


kinda useful from the one file I tried!!


interesting tool



