424: The Spectrum of Automated Processes for Your Dev Team
Joël shares his experience with the dry-rb suite of gems, focusing on how he's been using contracts to validate input data. Stephanie relates to Joël's insights with her preparation for RailsConf, discussing her methods for presenting code in slides and weighing the aesthetics and functionality of different tools like VS Code and Carbon.sh. She also encounters a CI test failure that prompts her to consider the implications of enforcing specific coding standards through CI processes.
The conversation turns into a discussion on managing coding standards and tools effectively, ensuring that automated systems help rather than hinder development. Joël and Stephanie ponder the balance between enforcing strict coding standards through CI and allowing developers the flexibility to bypass specific rules when necessary, ensuring tools provide valuable feedback without becoming obstructions.
- dry-rb
- A broader take on parsing
- Parse, don’t validate
- Debugging at the boundaries
- Specialized vocabulary
- Carbon
- RailsConf 2024
- Moving errors to the left
- Contracts
- Linters shouldn’t be optional
- Linter rules to avoid focused tests
- Thoughtbot Rails guides
- Danger
Transcript:
AD:
We're excited to announce a new workshop series for helping you get that startup idea you have out of your head and into the world. It's called Vision to Value. Over a series of 90-minute working sessions, you'll work with a thoughtbot product strategist and a handful of other founders to start testing your idea in the market and make a plan for building an MVP.
Join for all seven of the weekly sessions, or pick and choose the ones that address your biggest challenge right now. Learn more and sign up at tbot.io/visionvalue.
STEPHANIE: Hello and welcome to another episode of the Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn.
JOËL: And I'm Joël Quenneville. And together, we're here to share a bit of what we've learned along the way.
STEPHANIE: So, Joël, what's new in your world?
JOËL: I've been working on a project that uses the dry-rb suite of gems. And one of the things we're doing there is we're validating inputs using this concept of a contract. So, you sort of describe the shape and requirements of this, like hash of attributes that you get, and it will then tell you whether it's valid or not, along with error messages. We then want to use those to eventually build some other sort of value object type things that we use in the app. And because there's, like, failure points at multiple places that you have to track, it gets a little bit clunky.
And I got to thinking a little bit about, like, forget about the internal machinery. What is it that I would actually like to happen here? And really, what I want is to say, I've got this, like, bunch of attributes, which may or may not be correct. I want to pass them into a method, and then either get back a value object that I was hoping to construct or some kind of error.
STEPHANIE: That sounds reasonable to me.
JOËL: And then, thinking about it just a little bit longer, I was like, wait a minute, this idea of, like, unstructured input goes into a method, you get back something more structured or an error, that's kind of the broad definition of parsing. I think what I'm looking for is a parser object. And this really fits well with a style of processing popularized in the functional programming community called parse, don't validate: the idea that you use a parser like this to sort of transform data from looser to stricter values, values where you can have more assumptions.
And so, I create an object, and I can take a contract. I can take a class and say, "Attempt to take the following attributes. If they're valid according to the contract, create this class." And it, you know, does a bunch of error handling and some...under the hood, dry-rb does all this monad stuff. So, I handled that all inside of the object, but it's actually really nice.
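A rough sketch of the kind of parser object Joël is describing might look something like this, using a dry-validation contract; the names, the Struct target, and the choice to raise on failure are illustrative assumptions, not details from the actual project:

```ruby
require "dry/validation"

# Hypothetical contract; the real attributes would come from the form.
class AddressContract < Dry::Validation::Contract
  params do
    required(:street).filled(:string)
    required(:city).filled(:string)
  end
end

# Hypothetical value object to construct once the input is valid.
Address = Struct.new(:street, :city, keyword_init: true)

# Pairs validation (the contract) with construction (the target class),
# so callers only ever see a built value object or an error.
class Parser
  ParseError = Class.new(StandardError)

  def initialize(contract:, target:)
    @contract = contract
    @target = target
  end

  def parse(attributes)
    result = @contract.call(attributes)
    raise ParseError, result.errors.to_h.to_s unless result.success?

    @target.new(**result.to_h)
  end
end

# Parser.new(contract: AddressContract.new, target: Address)
#   .parse(street: "123 Main St", city: "Detroit")
# => #<struct Address street="123 Main St", city="Detroit">
```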
STEPHANIE: Cool. Yeah, I had a feeling that was where you were going to go. A while back, we had talked about really impactful articles that we had read over the course of the year, and you had shared one called Parse, Don't Validate. And that heuristic has actually been stuck in my head a little bit. And that was really cool that you found an opportunity to use it in, you know, previously trying to make something work that, like, you weren't really sure kind of how you wanted to implement that.
JOËL: I think I had a bit of a light bulb moment as I was trying to figure this out because, in my mind, there are sort of two broad approaches. There's the parse, don't validate where you have some inputs, and then you transform them into something stricter. Or there's more of that validation approach where you have inputs, you verify that they're correct, and then you pass them on to someone else. And you just say, "Trust me, I verified they're in the right shape." Dry-rb sort of contracts feel like they fit more under that validation approach rather than the parse, don't validate.
Where I think the kind of the light bulb turned on for me is the idea that if you pair a validation step and an object construction step, you've effectively approximated the idea of parse, don't validate. So, if I create a parser object that says, in sort of one step, I'm going to validate some inputs and then immediately use them if they're valid to construct an object, then I've kind of done a parse don't validate, even though the individual building blocks don't follow that pattern.
STEPHANIE: More like a parse and validate, if you will [laughs]. I have a question for you. Like, do you own those inputs kind of in your domain?
JOËL: In this particular case, sort of. They're coming from a form, so yes. But it's user input, so never trust that.
STEPHANIE: Gotcha.
JOËL: I think you can take this idea and go a little bit broader as well. It doesn't have to be, like, the dry-rb-related stuff. You could do, for example, a JSON schema, right? You're dealing with the input from a third-party API, and you say, "Okay, well, I'm going to have a sort of validation JSON schema." It will just tell you, "Is this data valid or not?" and give you some errors.
But what if you paired that with construction and you could create a little parser object, if you wanted to, that says, "Hey, I've got a payload coming in from a third-party API, validate it against this JSON schema, and attempt to construct this shopping cart object, and give me an error otherwise." And now you've sort of created a nice, little parse, don't validate pipeline which I find a really nice way to deal with data like that.
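A sketch of that JSON schema variant, assuming something like the json_schemer gem and an invented ShoppingCart value object, might look like this:

```ruby
require "json_schemer"

CART_SCHEMA = JSONSchemer.schema(
  {
    "type" => "object",
    "required" => ["items"],
    "properties" => { "items" => { "type" => "array" } }
  }
)

# Hypothetical richer object to build from the raw payload.
ShoppingCart = Struct.new(:items, keyword_init: true)

# Validate the third-party payload, then immediately construct the richer object.
def parse_cart(payload)
  errors = CART_SCHEMA.validate(payload).to_a
  return [:error, errors] if errors.any?

  [:ok, ShoppingCart.new(items: payload["items"])]
end
```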
STEPHANIE: From a user perspective, I'm curious: Does this also improve the user experience? I'm kind of wondering about that. It seems like it could. But have you explored that?
JOËL: This is more about the developer experience.
STEPHANIE: Got it.
JOËL: The user experience, I think, would be either identical or, you know, you can play around with things to display better errors. But this is more about the ergonomics on the development side of things. It was a little bit clunky to sort of assemble all the parts together. And sometimes we didn't immediately do both steps together at the same time. So, you might sort of have parameters that we're like, oh, these are totally good, we promise. And we pass them on to someone else, who passes them on to someone else. And then, they might try to do something with them and hope that they've got the data in the right shape.
And so, saying, let's co-locate these two things. Let's say the validation of the inputs and then the creation of some richer object happen immediately one after another. We're always going to bundle them together. And then, in this particular case, because we're using dry-rb, there's all this monad stuff that has to happen. That was a little bit clunky. We've sort of hidden that in one object, and then nobody else ever has to deal with that.
So, it's easier for developers in terms of just, if you want to turn inputs into objects, now you're just passing them into one object, into one, like, parser, and it works. It's a nicer developer experience, but there's also a little bit more safety in that because now you're sort of always working with these richer objects that have been validated.
STEPHANIE: Yeah, that makes sense. It sounds very cohesive because you've determined that these are two things that should always happen together. The problems arise when they start to actually get separated, and you don't have what you need in terms of using your interfaces. And that's very nice that you were able to bundle that in an abstraction that makes sense.
JOËL: A really interesting thing I think about abstractions is sometimes thinking of them as the combination of multiple other things. So, you could say that the combination of one thing and another thing, and all of a sudden, you have a new sort of combo thing that you have created. And, in this case, I think the combination of input validation and construction, and, you know, to a certain extent, error handling, so maybe it's a combination of three things gives you a thing you can call a parser. And knowing that that combination is a thing you can put a name on, I think, is really powerful, or at least it felt really powerful to me when that light bulb turned on.
STEPHANIE: Yeah, it's kind of like the whole is greater than the sum of its parts.
JOËL: Yeah.
STEPHANIE: Cool.
JOËL: And you and I did an episode on Specialized Vocabulary a while back. And that power of naming, saying that, oh, I don't just have a bunch of little atomic steps that do things. But the fact that the combination of three or four of them is a thing in and of itself that has a name that we can talk about has properties that we're familiar with, all of a sudden, that is a really powerful way to think about a system.
STEPHANIE: Absolutely. That's very exciting.
JOËL: So, Stephanie, what's new in your world?
STEPHANIE: So, I am plugging away at my RailsConf talk, and I reached the point where I'm starting to work on slides. And this talk will be the first one where I have a lot of code that I want to present on my slides. And so, I've been playing around with a couple of different tools to present code on slides or, I guess, you know, just being able to share code outside of an editor. And the two tools I'm trying are...VS Code actually has a copy with syntax highlighting functionality in its command palette. And so, that's cool because it basically, you know, just takes your editor styling and applies it wherever you paste that code snippet.
JOËL: Is that a screenshot or that's, like, formatted text that you can paste in, like, a rich text editor?
STEPHANIE: Yeah, it's the latter.
JOËL: Okay.
STEPHANIE: That was nice because if I needed to make changes in my slides once I had already put them there, I could do that. But then the other tool that I was giving a whirl is Carbon.sh. And that one, I think, is pretty popular because it looks very slick. It kind of looks like a little Mac window and is very minimal. But you can paste your code into their text editor, and then you can export PNGs of the code. So, those are just screenshots rather than editable text. And I [chuckles] was using that, exported a bunch of screenshots of all of my code in various stages, and then realized I had a typo [laughs].
JOËL: Oh no!
STEPHANIE: Yeah, so I have not got around to fixing that yet. That was pretty frustrating because now I would have to go back and regenerate all of those exports. So, that's kind of where I'm at in terms of exploring sharing code. So, if anyone has any other tools that they would use and recommend, I am all ears.
JOËL: How do you feel about balancing sort of the quantity of code that you put on a slide? Do you tend to go with, like, a larger code slide and then maybe, like, highlight certain sections? Do you try to explain ideas in general and then only show, like, a couple of lines? Do you show, like, maybe a class that's got ten lines, and that's fine? Where do you find that balance in terms of how much code to put on a slide? Because I feel like that's always the big dilemma for me.
STEPHANIE: Yeah. Since this is my first time doing it, like, I really have no idea how it's going to turn out. But what I've been trying is focusing more on changes between each slide, so the progression of the code. And then, I can, hopefully, focus more on what has changed since the last snippet of code we were looking at. That has also required me to be more fiddly with the formatting because I don't want essentially, like, the window that's containing the code to be changing sizes [laughs] in between slide transitions. So, that was a little bit finicky.
And then, there's also a few other parts where I am highlighting with, like, a border or something around certain texts that I will probably pause and talk about, but yeah, it's tough. I feel like I've seen it done well, but it's a lot harder and a lot more effort to [laughs] do in practice, I'm finding.
JOËL: When someone does it well, it looks effortless. And then, when somebody does it poorly, you're like, okay, I'm struggling to connect with this talk.
STEPHANIE: Yep. Yep. I hear that. I don't know if you would agree with this, but I get the sense that people who are able to make that look effortless have, like, a really deep and thorough understanding of the code they're showing and what exactly they think is important for the audience to pay attention to and understand in that given moment in their talk. That's the part that I'm finding a lot more work [laughs] because just thinking about, you know, the code I'm showing from a different lens or perspective.
JOËL: How do you sort of shrink it down to only what's essential for the point that you're trying to make? And then, more broadly, not just the point you're trying to make on this one slide, but how does this one slide fit into the broader narrative of the story you're trying to tell?
STEPHANIE: Right. So, we'll see how it goes for me. I'm sure it's one of those things that takes practice and experience, and this will be my first time, and we'll learn something from it.
JOËL: That's exciting. So, this is RailsConf in Detroit this year, I believe, May 7th through 9th.
STEPHANIE: Yep. That's right. So, recently on my client work, I encountered a CI failure on a PR of mine that I was surprised by. And basically, I had introduced a new association on a model, and this CI failure was saying like, "Hey, like, we see that you introduced this association. You should consider adding this to the presenter for this model." And I hadn't even known that that presenter existed [laughs]. So, it was kind of interesting to get a CI failure nudging me to consider if I need to be, like, making a different, you know, this other change somewhere else.
JOËL: That's a really fun use of CI. Do you think that was sort of helpful for you as a newer person on that codebase? Or was it more kind of annoying and, like, okay, this CI is over the top?
STEPHANIE: You know, I'm not sure [laughs]. For what it's worth, this presenter was actually for their admin dashboard, essentially. And so, the goal of what this workflow was trying to do was help folks who are using the admin dashboard have, like, all of the capabilities they need to do that job. And it makes sense that as you add behavior to your app, sometimes those things could get missed in terms of supporting, you know, not just your customers but developers, support product, you know, the other users of your app.
So, it was cool. And that was, you know, something that they cared enough to enforce. But yeah, I think there maybe is a bit of a slippery slope or at least some kind of line, or it might even be pretty blurry around what should our test failures really be doing.
JOËL: And CI is interesting because it can be a lot more than just tests. You can run all sorts of things. You can run a linter that fails. You could run various code quality tools that are not things like unit tests. And I think those are all valid uses of the CI process. What's interesting here is that it sounds like there were two systems that needed to stay in sync. And this particular CI check was about making sure that we didn't accidentally introduce code that would sort of drift apart in those two places. Does that sound about right?
STEPHANIE: Yeah, that does sound right. I think where it gets a little fuzzy, for me, is whether that kind of check was for code quality, was for a standard, or for a policy, right? It was kind of saying like, hey, like, this is the way that we've enforced developers to keep those two things from drifting. Whereas I think that could be also handled in different ways, right?
JOËL: Yeah. I guess in terms of, like, keeping two things in sync, I like to do that at almost, like, a code level, if possible. I mean, maybe you need a single source of truth, and then it just sort of happens automatically. Otherwise, maybe doing it in a way that will yell at you. So, you know, maybe there's a base class somewhere that will raise an error, and that will get caught by CI, or, you know, when you're manually testing and like, oh yeah, I need to keep this thing in sync. Maybe you can derive some things or get fancy with metaprogramming.
And the goal here is you don't have a situation where someone adds a new file in one place and then they accidentally break an admin dashboard because they weren't aware that you needed these two files to be one-to-one. If I can't do it just at a code level, I have done that before at, like, a unit test level, where maybe there's, like, a constant somewhere, and I just want to assert that every item in this constant array has a matching entry somewhere else or something like that, so that you don't end up effectively crashing the site for someone else because that is broken behavior.
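A unit test along those lines, with the constant and directory names made up for the sake of the example, might look roughly like this:

```ruby
# spec/admin/presenter_registry_spec.rb
RSpec.describe "admin presenter registry" do
  it "registers every report file in the presenter's allow list" do
    report_names = Dir.glob(Rails.root.join("app/reports/*.rb"))
                      .map { |path| File.basename(path, ".rb") }

    # AdminPresenter::PRESENTED_REPORTS is a hypothetical constant listing
    # everything the admin dashboard knows how to display.
    expect(AdminPresenter::PRESENTED_REPORTS).to match_array(report_names)
  end
end
```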
STEPHANIE: Yeah, in this particular case, it wasn't necessarily broken. It was asking you "Hey, should this be added to the admin presenter?" which I thought was interesting. But I also hear what you're saying. It actually does remind me of what we were talking about earlier when you've identified two things that should happen, like mostly together and whether the code gives you affordances to do that.
JOËL: So, one of the things you said is really interesting, the idea that adding to the presenter might have been optional. Does that mean that CI failed for you but that you could merge anyway, or how does that work?
STEPHANIE: Right. I should have been more clear. This was actually a test failure, you know, that happened to be caught by CI because I don't run [laughs] the whole test suite locally.
JOËL: But it's an optional test failure, so you're allowed to let that test fail.
STEPHANIE: Basically, it told me, like, if I want this to be shown in the presenter, add it to this method, or if not, add it to...it was kind of like an allow list basically.
JOËL: I see.
STEPHANIE: Or an ignore list, yeah.
JOËL: I think that kind of makes sense because now you have sort of, like, a required consistency thing. So, you say, "Our system requires you...whenever you add a file in this directory, you must add it to either an allow list or an ignore list, which we have set up in this other file." And, you know, sometimes you might forget, or sometimes you're new, and it's your first time adding a file in this directory, and you didn't remember there's a different place where you have to effectively register it. That seems like a reasonable check to have in place if you're relying on these sort of allow lists for other parts of the system, and you need to keep them in sync.
STEPHANIE: So, I think this is one of the few instances where I might disagree with you, Joël. What I'm thinking is that it feels a bit weird to me to enforce a decision that was so far away from the code change that I made. You know, you're right. On one hand, I am newer to this codebase, maybe have less of that context of different features, things that need to happen. It's a big app. But I almost think this test reinforces this weird coupling of things that are very far away from each other [laughs].
JOËL: So, it's maybe not the test itself you object to rather than the general architecture where these admin presenters are relying on these other objects. And by you introducing a file in a totally different part of the app, there's a chance that you might break the admin, and that feels weird to you.
STEPHANIE: Yeah, that does feel weird to me. And then, also that this implementation is, like, codified in this test, I guess, as opposed to a different kind of, like, acceptance test, rather than specifying specifically like, oh, I noticed, you know, you didn't add this new association or attribute to either the allow list or the ignore list. Maybe there is a more, like, higher level test that could steer us in keeping the features consistent without necessarily dictating, like, that it needs to happen in these particular methods.
JOËL: So, you're talking something like doing an integration test rather than a unit test? Or are you talking about something entirely different?
STEPHANIE: I think it could be an integration test or a system test. I'm not sure exactly. But I am wondering what options, you know, are out there for helping keep standards in place without necessarily, like, prescribing too much about, like, how it needs to be done.
JOËL: So, you used the word standard here, which I tend to think about more in terms of, like, code style, things like that. What you're describing here feels a little bit less like a standard and more of what I would call a code invariant.
STEPHANIE: Ooh.
JOËL: It's sort of like in this architecture the way we've set up, there must always be sort of one-to-one matching between files in this directory and entries in this array. Now, that's annoying because they're sort of, like, two different places, and they can vary independently. So, locking those two in sync requires you to do some clunky things, but that's sort of the way the architecture has been designed. These two things must remain one-to-one. This is an invariant we want in the app.
STEPHANIE: Can you define invariant for me [laughs], the way that you're using it here?
JOËL: Yeah, so something that is required to be true of all elements in this class of things, sort of a rule or a law that you're applying to the way that these particular bits of code need to behave. So, in this case, the invariant is every file in this directory must have a matching entry in this array. There's a lot of ways to enforce that. The sort of traditional idea is sort of pushing a lot of that checking...they'll sometimes talk about pushing errors to the left. So, if you can handle this earlier in the sort of code execution pipeline, can you do it maybe with a type system if you're in a typed language? Can you do it with some sort of input validation at runtime?
Some languages have the concept of contracts, so maybe you enforce invariants using that. You could even do something really ad hoc in Ruby, where you might say, "Hey, at boot time, when we load this particular array for the admin, just load this directory. Make sure that the entries in the array match the entries in the directory, and if they don't, raise an error." And I guess you would catch that probably in CI just because you tried to run your test suite, and you'd immediately get this boot error because the entries don't match.
So, I guess it kind of gets [inaudible 22:36] CI, but now it's not really a dedicated test anymore. It's more of, like, a property of the system. And so, in this case, I've sort of shifted the error checking or the checking of this invariant more into the architecture itself rather than in, like, things that exercise the architecture. But you can go the other way and say, "Well, let's shift it out of the architecture into tests," or maybe even beyond that, into, like, manual QA or, you know, other things that you can do to verify it.
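The boot-time version Joël sketches could be as small as an initializer like this, again with hypothetical names and paths:

```ruby
# config/initializers/check_admin_registry.rb
registered = AdminPresenter::PRESENTED_REPORTS.sort
on_disk = Dir.glob(Rails.root.join("app/reports/*.rb"))
             .map { |path| File.basename(path, ".rb") }
             .sort

unless registered == on_disk
  raise "Admin presenter registry is out of sync with app/reports. " \
        "Missing: #{(on_disk - registered).inspect}, extra: #{(registered - on_disk).inspect}"
end
```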
STEPHANIE: Hmm. That is very compelling to me.
JOËL: So, we've been talking so far about the idea of invariants, but the thing about invariants is that they don't vary. They're always true. This is a sort of fundamental rule of how this system works. The class of problems that I often struggle with how to deal with in these sorts of situations are rules that you only sometimes want to apply. They're not consistent. Have you ever run into things like that?
STEPHANIE: Yeah, I have. And I think that's what was compelling to me about what you were sharing about code invariance because I wasn't totally convinced this particular situation was a very clear and absolute rule that had been decided, you know, it seemed a little bit more ambiguous.
When you're talking about, like, applying rules that sometimes you actually don't want to apply, I think of things like linters, where we want to disable, you know, certain rules because we just can't get around implementing the way we want to while following those standards. Or maybe, you know, sometimes you just have to do something that is not accessible [laughs], not that that's what I would recommend, but in the case where there aren't other levers to change, you maybe want to disable some kind of accessibility check.
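In a Ruby codebase, that kind of escape hatch usually takes the form of an inline RuboCop disable comment, for example:

```ruby
# rubocop:disable Metrics/MethodLength
def build_quarterly_report
  # ...a long method the team has decided not to split up right now...
end
# rubocop:enable Metrics/MethodLength
```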
JOËL: That's always interesting, right? Because sometimes, you might want, like, the idea of something that has an escape hatch in it, but that immediately adds a lot of complexity to things as well. This is getting into more controversial territory. But I read a really compelling article by Jeroen Engels about how being able to, like, locally disable your linter for particular methods actually makes your code, but also the linter itself, a worse tool. And it really kind of made me rethink a little bit of how I approach linters as a tool.
STEPHANIE: Ooh.
JOËL: And what makes sense in a linter.
STEPHANIE: What was the argument for the linter being a worse tool by doing that?
JOËL: You know, it's funny that you ask because now I can't remember, and it's been a little while since I've read the article.
STEPHANIE: I'll have to revisit it after the show [laughs].
JOËL: Apparently, I didn't do the homework for this episode, but we'll definitely link to that article in the show notes.
STEPHANIE: So, how do you approach either introducing a new rule to something like a linter or maybe reconsidering an existing rule? Like, how would you go about finding, like, consensus on that from your team?
JOËL: That varies a lot by organizational culture, right? Some places will do it top-down, some of them will have a broader conversation and come to a consensus. And sometimes you just straight up don't get a choice. You're pulling in a tool like standard rb, and you're saying, "Look, we don't want to have a discussion about every little style thing, so whatever, you know, the community has agreed on for the standard rb linter is the style we're using. There are no discussions. Do what the linter tells you."
STEPHANIE: Yeah, that's true. I think I have to adapt to whatever, you know, client culture is like when I join new projects. You know, sometimes I do see people being like, "Hey, I think it's kind of weird that we have this," or, "Hey, I've noticed, for example, oh, we're merging focused RSpec tests. Like, let's introduce a rule to make sure that that doesn't happen."
I also think that a different approach is for those things not to be enforced at all by automation, but, you know, there are still guidelines. I think the thoughtbot guides are an example of pretty opinionated guidelines around style and syntax. But I don't think that those kinds of things would, you know, ever be, like, enforced in a way that would be blocking.
JOËL: Those are kind of hard because they're not as consistent as you would think, so it's not a rule you can apply every time. It's more of a, here's some things to maybe keep in mind. Or if you're writing code in this way, think about some of the edge cases that might happen, or don't default to writing it in this way because things might go wrong. Make sure you know what you're doing. I love the phrase, "Must be able to justify this," or sometimes, "Must convince your pair that this is okay." So, default to writing in style A, avoid style B unless you can have a compelling reason to do so and can articulate that on your PR or, you know, convince your pair that that's the right way to go.
STEPHANIE: Interesting. It's kind of like the honor system, then [laughs].
JOËL: And I think that's sort of the general way when you're working with developers, right? There's a lot of areas where there is ambiguity. There is no single best way to do it. And so, you rely on people's expertise to build systems that work well. There are some things where you say, look, having conversations about these things is not useful. We want to have some amount of standardization or uniformity about certain things. Maybe there are invariants you want to hold. Maybe there's certain things we're, like, this should never get to production.
Whenever you've got these, like, broad sweeping statements about things should be always true or never true, that's a great time to introduce something like a linting rule. When it's more up to personal judgment, and you just want to nudge that judgment one way or another, then maybe it's better to have something like a guide.
STEPHANIE: Yeah, what I'm hearing is there is a bit of a spectrum.
JOËL: For sure. From things that are always true to things that are, like, sometimes true. I think I'm sort of curious about the idea of going a level beyond that, though, beyond things like just code style or maybe even, like, invariants you want to hold or something, being able to make suggestions to developers based off the code that is written. So, now you're applying more like heuristics, but instead of asking a human to apply those heuristics at code review time and leave some comments, maybe there's a way to get automated feedback from a tool.
STEPHANIE: Yeah, I think we had mentioned code analysis tools earlier because some teams and organizations include those as part of their CI builds, right? And, you know, even Brakeman, right? Like, that's an analysis tool for security. But I can't recall if I've seen an organization use things like Flog metrics, which measure code complexity, and things like that. How would you feel if that were a check that was blocking your work?
JOËL: So, I've seen things like that be used if you're using, like, the Code Climate plugin for GitHub. And Code Climate internally does effectively flog and other things that are fancier on your code quality. And so, you can set a threshold to say, hey, if complexity gets higher than a certain amount, fail the build.
You can also...if you're doing things via GitHub, what's nice is that you can do effectively non-blocking comments. So, instead of failing CI to say, "Hey, this method looks really complex. You cannot merge until you have made this method less complex," maybe the sort of, like, next step up in ambiguity is to just leave a comment on a PR from a tool and say, "Hey, this method here is looking really complex. Consider breaking it up."
STEPHANIE: Yeah, there is a tool that I've seen but not used called Danger, and its tagline is, Stop saying, "You forgot to..." in code review [laughs]. And it basically does that, what you were saying, of, like, leaving probably a suggestion. I can imagine it's blocking, but a suggestive comment that just automates that rather than it being a manual process that humans have to remember or notice.
JOËL: And there's a lot of things that could be specific to your organization or your architecture. So, you say, "Hey, you introduced a file here. Would you consider also making an entry to this presenter file so that it's editable on the admin?" And maybe that's a better place to handle that. Just a comment. But you wouldn't necessarily want every code reviewer to have to think about that.
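With Danger, that kind of non-blocking nudge might look something like this in a Dangerfile; the paths and wording here are invented for the example:

```ruby
# Dangerfile
new_reports = git.added_files.select { |path| path.start_with?("app/reports/") }
presenter_changed = git.modified_files.include?("app/presenters/admin_presenter.rb")

if new_reports.any? && !presenter_changed
  warn(
    "You added #{new_reports.join(', ')} but didn't update the admin presenter. " \
    "Consider registering the new file there so it shows up on the admin dashboard."
  )
end
```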
STEPHANIE: So, I do think that I am sometimes not necessarily suspicious, but I have also seen tools like that end up just getting in the way, and it just becomes something you ignore. It's something you end up always using the escape hatch for, or people just find ways around it because they're harming more than they're helping. Do you have any thoughts about how to kind of keep those things in check and make sure that the tools we introduce genuinely are kind of helping the organization do the right thing rather than kind of being these perhaps arbitrary blockers?
JOËL: I'm going to throw a fancy phrase at you.
STEPHANIE: Ooh, I'm ready.
JOËL: Signal-to-noise ratio.
STEPHANIE: Whoa, uh-huh.
JOËL: So, how often is the feedback from your tool actually helpful, and how often is it just noise that you have to dismiss, or manually override, or things like that? At some point, the ratio becomes so much that you lose the signal in all the noise. And so, maybe you even, like, because you're always just ignoring the feedback from this tool, you accidentally start overriding things that would be genuinely helpful. And, at that point, you've got the worst of both worlds.
So, sort of keeping track on what that ratio is, and there's not, like, a magic number. I'm not going to tell you, "Oh, this is an 80/20 principle. You need to have, you know, 80% of the time it's useful and only 20% of the time it's not useful." I don't have a number to give you, but keeping track on maybe, you know, is it more often than not useful? Is your team getting to the point where they're just ignoring feedback from this tool? And thinking in terms of that signal versus that noise is, I think, a useful heuristic, to go back to that word again, for managing whether a tool is still helpful.
STEPHANIE: Yeah. And I would even go on to say that, you know, I always appreciate when people in leadership roles keep an eye on these things. And they're like, "Oh, I've been hearing that people are just totally numb to this tool [laughs]" or, you know, "There's no engagement on this. People are just ignoring those signals." Any developer impacted by this, it is valid to bring it up if you're getting frustrated by it or just finding yourself, you know, having all of these obstacles getting in the way of your development process.
JOËL: Sometimes, this can be a symptom that you're mixing too many classes of problems together in one tool. So, maybe there are things that are, like, really dangerous to your product to go live with them. Maybe it's, you know, something like Brakeman where you're doing security checks, and you really, ideally, would not go to production with a failing security check.
And then, you've got some random other style things in there, and you're just like, oh yeah, whatever, it's this tool because it's mostly style things but occasionally gives you a security problem. And because you ignore it all the time, now you accidentally go to production with a security problem. So, splitting that out and saying, "Look, we've got blocking and non-blocking checks because we recognize these are two classes of problems," can be a helpful solution to this problem.
STEPHANIE: Joël, did you just apply an object-oriented design principle to an organizational system?
[laughter]
JOËL: I may be too much of a developer.
STEPHANIE: Cool. Well, I really appreciate your input on this because, you know, I was just kind of mulling over, like, how I felt about these kinds of things that I encounter as a developer. And I am glad that we got to kind of talk about it. And I think it gives me a more expanded vocabulary to, you know, analyze or reflect when I encounter these things on different client organizations.
JOËL: And every organization is different, right? Like, you've got to learn the culture, learn the different elements of that software. What are the things that are invariant? What are the things that are dangerous that we don't want to ship without? What are the things that we're doing just for consistency? What are things which are, like, these are culturally things that we'd like to do? There's all these levels, and it's a lot to pick up.
STEPHANIE: Yeah. At the end of the day, I think what I really liked about the last thing you said was being able to identify the problem, like the class of problem, and applying the right tool for the right job. It helps me take a step back and perhaps even think of different solutions that we might not have thought about earlier because we had just gotten so used to the one way of enforcing or checking things like that.
JOËL: On that note, shall we wrap up?
STEPHANIE: Let's wrap up. Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at hosts@bikeshed.fm via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeee!!!!!!
AD:
Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us.
More info on our website at: tbot.io/referral. Or you can email us at: referrals@thoughtbot.com with any questions.
Some languages have the concept of contracts, so maybe you enforce invariants using that. You could even do something really ad hoc in Ruby, where you might say, "Hey, at boot time, when we load this particular array for the admin, just load this directory. Make sure that the entries in the array match the entries in the directory, and if they don't, raise an error." And I guess you would catch that probably in CI just because you tried to run your test suite, and you'd immediately get this boot error because the entries don't match.
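The ad hoc boot-time version Joël describes could be the same comparison moved into an initializer, so the app refuses to boot when the lists drift; this is only a sketch with invented file and constant names:

```ruby
# config/initializers/check_admin_registry.rb -- hypothetical file and constants
Rails.application.config.after_initialize do
  model_names = Dir[Rails.root.join("app/models/*.rb").to_s]
                  .map { |path| File.basename(path, ".rb") }
  registered = AdminPresenter::ALLOWED + AdminPresenter::IGNORED

  missing = model_names - registered
  # Fail fast at boot (and therefore in CI) instead of waiting for a dedicated test.
  raise "Models missing from the admin registry: #{missing.join(', ')}" if missing.any?
end
```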
So, I guess it kind of gets [inaudible 22:36] CI, but now it's not really a dedicated test anymore. It's more of, like, a property of the system. And so, in this case, I've sort of shifted the error checking or the checking of this invariant more into the architecture itself rather than in, like, things that exercise the architecture. But you can go the other way and say, "Well, let's shift it out of the architecture into tests," or maybe even beyond that, into, like, manual QA or, you know, other things that you can do to verify it.
STEPHANIE: Hmm. That is very compelling to me.
JOËL: So, we've been talking so far about the idea of invariants, but the thing about invariants is that they don't vary. They're always true. This is a sort of fundamental rule of how this system works. The class of problems that I often struggle with how to deal with in these sorts of situations are rules that you only sometimes want to apply. They're not consistent. Have you ever run into things like that?
STEPHANIE: Yeah, I have. And I think that's what was compelling to me about what you were sharing about code invariants because I wasn't totally convinced this particular situation was a very clear and absolute rule that had been decided; you know, it seemed a little bit more ambiguous.
When you're talking about, like, applying rules that sometimes you actually don't want to apply, I think of things like linters, where we want to disable, you know, certain rules because we just can't get around implementing the way we want to while following those standards. Or maybe, you know, sometimes you just have to do something that is not accessible [laughs], not that that's what I would recommend, but in the case where there aren't other levers to change, you maybe want to disable some kind of accessibility check.
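For context, the local escape hatch being described usually looks like an inline disable comment in RuboCop-style linters; the cop name here is just illustrative:

```ruby
# rubocop:disable Metrics/MethodLength
def export_report
  # a long method the team hasn't found a good way to break up yet
end
# rubocop:enable Metrics/MethodLength
```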
JOËL: That's always interesting, right? Because sometimes, you might want, like, the idea of something that has an escape hatch in it, but that immediately adds a lot of complexity to things as well. This is getting into more controversial territory. But I read a really compelling article by Jeroen Engels about how being able to, like, locally disable your linter for particular methods actually makes not just your code but the linter itself a worse tool. And it really kind of made me rethink a little bit how I approach linters as a tool.
STEPHANIE: Ooh.
JOËL: And what makes sense in a linter.
STEPHANIE: What was the argument for the linter being a worse tool by doing that?
JOËL: You know, it's funny that you ask because now I can't remember, and it's been a little while since I've read the article.
STEPHANIE: I'll have to revisit it after the show [laughs].
JOËL: Apparently, I didn't do the homework for this episode, but we'll definitely link to that article in the show notes.
STEPHANIE: So, how do you approach either introducing a new rule to something like a linter or maybe reconsidering an existing rule? Like, how would you go about finding, like, consensus on that from your team?
JOËL: That varies a lot by organizational culture, right? Some places will do it top-down, some of them will have a broader conversation and come to a consensus. And sometimes you just straight up don't get a choice. You're pulling in a tool like standardrb, and you're saying, "Look, we don't want to have a discussion about every little style thing, so whatever, you know, the community has agreed on for the standardrb linter is the style we're using. There are no discussions. Do what the linter tells you."
STEPHANIE: Yeah, that's true. I think I have to adapt to whatever, you know, client culture is like when I join new projects. You know, sometimes I do see people being like, "Hey, I think it's kind of weird that we have this," or, "Hey, I've noticed, for example, oh, we're merging focused RSpec tests. Like, let's introduce a rule to make sure that that doesn't happen."
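One possible way to enforce that kind of rule, sketched with the assumption that CI sets a CI environment variable, is an RSpec hook that fails any focused example on CI:

```ruby
# spec/spec_helper.rb
RSpec.configure do |config|
  # Locally, `focus` narrows the run to the tagged examples.
  config.filter_run_when_matching :focus

  if ENV["CI"]
    # On CI, a committed `focus` tag fails the build instead of silently
    # shrinking the test suite.
    config.before(:each, :focus) do
      raise "Focused spec detected; remove `focus` before merging"
    end
  end
end
```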
I also think that a different approach is for those things not to be enforced at all by automation, but we, you know, there are still guidelines. I think the thoughtbot guides are an example of pretty opinionated guidelines around style and syntax. But I don't think that those kinds of things would, you know, ever be, like, enforced in a way that would be blocking.
JOËL: Those are kind of hard because they're not as consistent as you would think, so it's not a rule you can apply every time. It's more of a, here's some things to maybe keep in mind. Or if you're writing code in this way, think about some of the edge cases that might happen, or don't default to writing it in this way because things might go wrong. Make sure you know what you're doing. I love the phrase, "Must be able to justify this," or sometimes, "Must convince your pair that this is okay." So, default to writing in style A, avoid style B unless you can have a compelling reason to do so and can articulate that on your PR or, you know, convince your pair that that's the right way to go.
STEPHANIE: Interesting. It's kind of like the honor system, then [laughs].
JOËL: And I think that's sort of the general way when you're working with developers, right? There's a lot of areas where there is ambiguity. There is no single best way to do it. And so, you rely on people's expertise to build systems that work well. There are some things where you say, look, having conversations about these things is not useful. We want to have some amount of standardization or uniformity about certain things. Maybe there are invariants you want to hold. Maybe there are certain things where we're, like, this should never get to production.
Whenever you've got these, like, broad sweeping statements about things should be always true or never true, that's a great time to introduce something like a linting rule. When it's more up to personal judgment, and you just want to nudge that judgment one way or another, then maybe it's better to have something like a guide.
STEPHANIE: Yeah, what I'm hearing is there is a bit of a spectrum.
JOËL: For sure. From things that are always true to things that are, like, sometimes true. I think I'm sort of curious about the idea of going a level beyond that, though, beyond things like just code style or maybe even, like, invariants you want to hold or something, being able to make suggestions to developers based off the code that is written. So, now you're applying more like heuristics, but instead of asking a human to apply those heuristics at code review time and leave some comments, maybe there's a way to get automated feedback from a tool.
STEPHANIE: Yeah, I think we had mentioned code analysis tools earlier because some teams and organizations include those as part of their CI builds, right? And, you know, even Brakeman, right? Like, that's an analysis tool for security. But I can't recall if I've seen an organization use things like Flog metrics, which measure code complexity, and things like that. How would you feel if that were a check that was blocking your work?
JOËL: So, I've seen things like that be used if you're using, like, the Code Climate plugin for GitHub. And Code Climate internally runs, effectively, Flog and other, fancier checks on your code quality. And so, you can set a threshold to say, hey, if complexity gets higher than a certain amount, fail the build.
You can also...if you're doing things via GitHub, what's nice is that you can do effectively non-blocking comments. So, instead of failing CI to say, "Hey, this method looks really complex. You cannot merge until you have made this method less complex," maybe the sort of, like, next step up in ambiguity is to just leave a comment on a PR from a tool and say, "Hey, this method here is looking really complex. Consider breaking it up."
STEPHANIE: Yeah, there is a tool that I've seen but not used called Danger, and its tagline is, Stop saying, "You forgot to..." in code review [laughs]. And it basically does that, what you were saying, of, like, leaving probably a suggestion. I can imagine it being blocking, but also just a suggestive comment that automates that rather than it being a manual process that humans have to remember or notice.
JOËL: And there's a lot of things that could be specific to your organization or your architecture. So, you say, "Hey, you introduced a file here. Would you consider also making an entry to this presenter file so that it's editable on the admin?" And maybe that's a better place to handle that. Just a comment. But you wouldn't necessarily want every code reviewer to have to think about that.
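A Dangerfile nudging in that direction might look something like this sketch, where the presenter path and the models directory are made-up examples rather than a real project's layout:

```ruby
# Dangerfile
new_model_files = git.added_files.grep(%r{\Aapp/models/})
presenter_touched = git.modified_files.include?("app/presenters/admin_presenter.rb")

if new_model_files.any? && !presenter_touched
  # A non-blocking warning on the PR, rather than a failed build.
  warn(
    "You added model files but didn't update app/presenters/admin_presenter.rb. " \
    "Consider adding them to the allow or ignore list so the admin stays in sync."
  )
end
```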
STEPHANIE: So, I do think that I am sometimes not necessarily suspicious, but I have also seen tools like that end up just getting in the way, and it just becomes something you ignore. It's something you end up always using the escape hatch for, or people just find ways around it because they're harming more than they're helping. Do you have any thoughts about how to kind of keep those things in check and make sure that the tools we introduce genuinely are kind of helping the organization do the right thing rather than kind of being these perhaps arbitrary blockers?
JOËL: I'm going to throw a fancy phrase at you.
STEPHANIE: Ooh, I'm ready.
JOËL: Signal-to-noise ratio.
STEPHANIE: Whoa, uh-huh.
JOËL: So, how often is the feedback from your tool actually helpful, and how often is it just noise that you have to dismiss, or manually override, or things like that? At some point, the ratio becomes so much that you lose the signal in all the noise. And so, maybe you even, like, because you're always just ignoring the feedback from this tool, you accidentally start overriding things that would be genuinely helpful. And, at that point, you've got the worst of both worlds.
So, sort of keeping track of what that ratio is, and there's not, like, a magic number. I'm not going to tell you, "Oh, this is an 80/20 principle. You need to have, you know, 80% of the time it's useful and only 20% of the time it's not useful." I don't have a number to give you, but keeping track of maybe, you know, is it more often than not useful? Is your team getting to the point where they're just ignoring feedback from this tool? And thinking in terms of that signal versus that noise is, to go back to that word again, a useful heuristic for managing whether a tool is still helpful.
STEPHANIE: Yeah. And I would even go on to say that, you know, I always appreciate when people in leadership roles keep an eye on these things. And they're like, "Oh, I've been hearing that people are just totally numb to this tool [laughs]" or, you know, "There's no engagement on this. People are just ignoring those signals." And if you're a developer impacted by this, it is valid to bring it up if you're getting frustrated by it or just finding yourself, you know, with all of these obstacles getting in the way of your development process.
JOËL: Sometimes, this can be a symptom that you're mixing too many classes of problems together in one tool. So, maybe there are things that are, like, really dangerous to your product to go live with them. Maybe it's, you know, something like Brakeman where you're doing security checks, and you really, ideally, would not go to production with a failing security check.
And then, you've got some random other style things in there, and you're just like, oh yeah, whatever, it's this tool because it's mostly style things but occasionally gives you a security problem. And because you ignore it all the time, now you accidentally go to production with a security problem. So, splitting that out and saying, "Look, we've got blocking and non-blocking checks because we recognize these are two different classes of problems" can be a helpful solution to this problem.
STEPHANIE: Joël, did you just apply an object-oriented design principle to an organizational system?
[laughter]
JOËL: I may be too much of a developer.
STEPHANIE: Cool. Well, I really appreciate your input on this because, you know, I was just kind of mulling over, like, how I felt about these kinds of things that I encounter as a developer. And I am glad that we got to kind of talk about it. And I think it gives me a more expanded vocabulary to, you know, analyze or reflect when I encounter these things on different client organizations.
JOËL: And every organization is different, right? Like, you've got to learn the culture, learn the different elements of that software. What are the things that are invariant? What are the things that are dangerous that we don't want to ship without? What are the things that we're doing just for consistency? What are things which are, like, these are culturally things that we'd like to do? There's all these levels, and it's a lot to pick up.
STEPHANIE: Yeah. At the end of the day, I think what I really liked about the last thing you said was being able to identify the problem, like the class of problem, and applying the right tool for the right job. It helps me take a step back and perhaps even think of different solutions that we might not have thought about earlier because we had just gotten so used to the one way of enforcing or checking things like that.
JOËL: On that note, shall we wrap up?
STEPHANIE: Let's wrap up. Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at hosts@bikeshed.fm via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeee!!!!!!
AD:
Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us.
More info on our website at: tbot.io/referral. Or you can email us at: referrals@thoughtbot.com with any questions.