Reliable way to debug / lint test script filter feedback?


Well, I had code that gave me the following output in the terminal (a rough lint sketch comes first, then the output itself):
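
As a starting point, a plain XML well-formedness check seems to be the minimum viable lint here. A minimal sketch, assuming the stock macOS xmllint from libxml2, with my-filter.sh as a placeholder name for whatever generates the feedback:

# capture the script filter's raw output (my-filter.sh is a placeholder)
./my-filter.sh > feedback.xml

# well-formedness check: silent on success, otherwise it prints the offending line
xmllint --noout feedback.xml

# pretty-print to eyeball the item/title/subtitle nesting
xmllint --format feedback.xml | less

One caveat: entities like &mdash; are defined in HTML but not in bare XML, so a strict parser will report them as undefined; numeric references such as &#8212; pass in both.

Anyway, here's the output: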

 

<?xml version="1.0"?>
<items>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/actor-syntax-from-scratch" valid="yes">
    <title>Actor Syntax From Scratch</title>
    <subtitle>In this screencast, we use Ruby's remarkable flexibility to actually implement the theoretical actor syntax shown in <a href="https://www.destroyallsoftware.com/screencasts/catalog/separating-arrangement-and-work">"Separating Arrangement and Work"</a>. We start at the theoretical syntax, then work forward to a working system: get it to parse; get it to run; and then get it to do the work that we want it to do. This highlights the flexibility of Ruby, as well as serving as a simple explanation of the actor model.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/analyzing-context-switches" valid="yes">
    <title>Analyzing Context Switches</title>
    <subtitle>When analyzing a system's behavior or performance, there's a huge spectrum of tools available: everything from the load average&mdash;which boils a machine down to a single number&mdash;to DTrace and strace, which can provide very fine-grained information. We'll briefly look at the load average and why it's not sufficient. Then we'll look at one particular method that's in between the two extremes: the /usr/bin/time command, which can report many statistics about program execution. We'll use it to compute the number of involuntary context switches in each of Destroy All Software's Cucumber scenarios, giving us a starting point for localizing an anomaly in the system's behavior.

Note: Some questions have been raised about the exact details of load average accounting. Unfortunately, I'm not an expert on Linux kernel internals, so I'm going to resist the urge to try to correct myself. To be brief: I may have significantly overstated the impact of IO on load average accounting. In any case, the larger point is unaffected: load average is at one extreme of the continuum of performance analysis tools and, once you get past the first few seconds of performance analysis, you'll need to dig deeper.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/a-bit-of-c" valid="yes">
    <title>A Bit of C</title>
    <subtitle>Most of the software running on our machines is written in C&mdash;the operating system, our VMs and compilers, the Unix shell and its various utilities, and our editors. This screencast briefly introduces a project written in C, focusing on its unit tests and the ways in which its design is similar to OO. This is not a tutorial on C&mdash;programming in it effectively requires a lot of learning. But, as demonstrated here, many ideas and practices used in more modern languages do apply directly to programming in C.

Note: there's at least one pointer bug visible in the code shown here ("trie_create" incorrectly zeroes the "values" field instead of the entire trie). Thanks to <a href="https://twitter.com/cassarani">Leo Cassarani</a> for <a href="https://twitter.com/cassarani/status/287191713334845440">pointing this out</a>.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/test-cases-vs-examples" valid="yes">
    <title>Test Cases vs. Examples</title>
    <subtitle>Before BDD and tools like RSpec took off, tests were often written in a "test case" style: they were phrased in the computer's terms. Well-written RSpec usually approaches testing from the human direction: instead of focusing on the software's terminology, the human-visible behavior is specified in English, and the examples map those English descriptions onto software terminology. In this screencast we'll refactor part of <a href="https://github.com/harukizaemon/hamster">Hamster</a>'s test suite, translating it from a test case style to an example style. This will require many trade-offs, most notably trading completeness of test coverage in some corner cases for readability of test names.

Note: There's a bug in the mutation test that I missed during recording. Because RSpec's "let" variables are memoized, the "empty" value is only computed once. If it were mutated, both references to "empty" would point to the mutated value, defeating the test. As pointed out by <a href="https://twitter.com/myronmarston">Myron Marston</a>, the test would <a href="https://gist.github.com/4432380">even pass</a> for Ruby's Array class, which clearly mutates. Unfortunately, mistakes like this are possible when validating a test by breaking the test itself, rather than by breaking the production code.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/debugging-with-tests" valid="yes">
    <title>Debugging With Tests</title>
    <subtitle>We'll start by translating a bug report into a test, providing an objective first-pass validation of any fix we come up with. Then we'll push down into the system using tests as a guide: each test we write will be smaller and more focused. To simulate ignorance of the system, we'll avoid looking at production code until we've gotten to the very bottom of the stack. To simulate a complex bug where the stack trace doesn't indicate the full subtlety of interactions, we'll push down one step at a time instead of simply jumping to the deepest part of the stack. When we get down to the defect itself, we can then run the tests we generated in reverse order, "popping stack" back to the system-level view.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/imperative-to-oo-to-functional" valid="yes">
    <title>Imperative to OO to Functional</title>
    <subtitle>This screencast demonstrates a refactoring through three and a half paradigms. First, we see the code in imperative form: code that mutates data, with the code and data being separated. Then we merge some of the data and code to form an object to get object oriented code: code and data mixed, with mutation. We quickly look at a variant of this where the object is only allowed to have pure functions (no mutation or IO). Finally, we remove the object, leaving only the functions, which gives us a more standard functional solution.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/isolating-by-separating-value" valid="yes">
    <title>Isolating by Separating Value</title>
    <subtitle>This screencast presents a method for writing isolated tests without using stubs or mocks. We'll explicitly separate the value part of an object&mdash;its instance variables&mdash;from the behavior part&mdash;its methods. Then, when testing other classes, we can integrate them only with the value part, as exposed by the accessor methods.

We avoid the danger of mocks and stubs going out of sync with the code being tested, since we're integrating with real accessor methods that will exist in the final object. We also avoid the danger of accidentally calling complex methods that shouldn't be under test: since we only test against the data part of the object, there's no risk of integration.

This method isn't universal. It falls flat on objects that are heavy on behavior and light on data. But it is one way to test against commonly-used, data-heavy classes (in the case of the Destroy All Software codebase that we work on here, the Screencast class).</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/primitive-obsession" valid="yes">
    <title>Primitive Obsession</title>
    <subtitle>Primitive obsession is the use of primitive values&mdash;integers, strings, arrays, hashes, etc.&mdash;when a more specialized, domain-relevant object would provide a better design. Rather than discuss the idea abstractly, this screencast is a concrete example: we examine Destroy All Software's Screencast class, then replace it throughout the system with a simple hash. At the end, we review the changes to get a sense of what primitive obsession does to a design.

Note: As mentioned in the screencast, no tests are run or touched. At over 15 minutes long, this screencast is well on the high end of DAS lengths and test maintenance would've increased that. As a result, at least one mistake is made: the Screencast.slug method should've taken a screencast and computed the slug from it. This doesn't impact the design analysis, but certainly reaffirms the importance of testing.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/separating-arrangement-and-work" valid="yes">
    <title>Separating Arrangement and Work</title>
    <subtitle>Hard coupling&mdash;putting the name of one class inside of another&mdash;is a problem for both design and testing. It makes adjusting the objects' boundaries harder because the static names must be changed and different dependencies can't be swapped in for those hard-coupled points. It makes testing harder because you can't test one object without it invoking the other, so focusing a test is difficult. Dependency injection can help with this, but it's really a special case of a more general principle: separating the arrangement of a program's pieces from the work that they actually do.

This screencast is an example of separating arrangement and work, but not by dependency injection. Instead, we separate the data flow between objects from the objects themselves, which eventually allows us to convert to a simple actor-based concurrency model in one smooth transition.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/where-correctness-is-enforced" valid="yes">
    <title>Where Correctness Is Enforced</title>
    <subtitle>Imagining the most naive possible Rails app, it would probably do a lot of data validation at the controller level. We don't do that, of course: we push the data integrity responsibility down to ActiveRecord, via validations. Unfortunately, ActiveRecord validations aren't good enough either: there are a few ways that they can be sidestepped, and those ways will eventually come up in many real-world apps. This screencast looks at those sidestepping mechanisms, the problems they create, and how to solve them by using real database constraints.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/python-vs-ruby-objects" valid="yes">
    <title>Python vs. Ruby Objects</title>
    <subtitle>Many people assume that Python and Ruby have similar object systems. That's sort of true, in that they have roughly the same level of dynamic expressiveness, but the way that they achieve it is actually quite different. We'll compare the two systems, focusing on one very fundamental division between them: Python deals with attributes, but Ruby deals with methods, with each implementing one in terms of the other. This leads to some characteristic properties of the language: Python's consistency and focus on correctness vs. Ruby's terseness for defining and calling methods.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/removing-a-rubinius-feature" valid="yes">
    <title>Removing a Rubinius Feature</title>
    <subtitle>Rubinius is a Ruby implementation known for being "written in Ruby", although that's not entirely true since it does have a large VM written in C++. We'll start off by briefly looking at the structure of Rubinius, focusing on the load order and object system bootstrapping. Then we'll remove a feature from it by updating all of the source files that reference that feature, then verifying the change both by running the tests and by visual inspection.

Note: At the very beginning, I misspeak and say that Class's superclass is Class. This isn't true: Class's superclass is Module, and the class of Class is Class, which is what's being set up in the line in question. Object system bootstrapping is hard to keep straight!</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/splitting-active-record-models" valid="yes">
    <title>Splitting Active Record Models</title>
    <subtitle>This is a continuation of the <a href="https://www.destroyallsoftware.com/screencasts/catalog/collapsing-services-into-values">previous</a> screencast, "Collapsing Services into Values". We'll transform the Subscription value object we created into a database table and ActiveRecord model.

Although creating the value object in the last screencast did clean up the system, it still left the User class with a lot of knowledge about subscriptions. User still contained their validations, and in a complete system it would have knowledge about how to create subscriptions from itself and update itself from subscriptions. When we extract the subscriptions into their own table and model, this knowledge disappears from User entirely, although it does re-raise the question we started with: where should the logic go?</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/collapsing-services-into-values" valid="yes">
    <title>Collapsing Services Into Values</title>
    <subtitle>In the "What Goes in Active Records" series (<a href="https://www.destroyallsoftware.com/screencasts/catalog/what-goes-in-active-records">part 1</a> and <a href="https://www.destroyallsoftware.com/screencasts/catalog/what-goes-in-active-records-part-2">part 2</a>), we looked at some design constraints for what goes in ActiveRecord models. Sometimes, these constraints can lead to very small classes being extracted, which often feels awkward.

This screencast looks at one such class: a single line of service code with an alarmingly long test file. By creating a new value class and tightening the service's interface around it, we shorten the tests slightly. Then, by collapsing the service into the new value class, we shorten them even more. We're left with tests that are easier to reason about, and with a new abstraction reified in the code.

Finally, although this isn't stated in the screencast, the tests are fully isolated from Rails after this refactoring, whereas the original tests integrated with it. The new tests are about eight times faster (and will remain fast even as the application grows).</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/functional-core-imperative-shell" valid="yes">
    <title>Functional Core, Imperative Shell</title>
    <subtitle>Purely functional code makes some things easier to understand: because values don't change, you can call functions and know that only their return value matters&mdash;they don't change anything outside themselves. But this makes many real-world applications difficult: how do you write to a database, or to the screen?

In this screencast we look at one method for crossing this divide. We review a Twitter client whose core is functional: managing tweets, syncing timelines to incoming Twitter API data, remembering cursor positions within the tweet list, and rendering tweets to text for display. This functional core is surrounded by a shell of imperative code: it manipulates stdin, stdout, the database, and the network, all based on values produced by the functional core.

This design has many nice side effects. For example, testing the functional pieces is very easy, and it often naturally allows isolated testing with no test doubles. It also leads to an imperative shell with few conditionals, making reasoning about the program's state over time much easier.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/test-isolation-without-mocks" valid="yes">
    <title>Test Isolation Without Mocks</title>
    <subtitle>In this screencast we TDD the same code twice: once in the traditional, imperative OO way with mutation; then again in a functional way by returning a value.

We look at several differences between the two implementations, but the most interesting is in the way that they're isolated. Though both are isolated against external behavior, only the OO version requires mocks. The functional version achieves isolation by taking a value in (a Tweet value object) and returning a value out (an array of strings to be rendered). This saves us from the danger of mocked methods going out of sync.

Note 1: For realism, the tests are written in exactly the way that I naturally wrote them on my first practice run. This does make the OO version slightly more complicated; the mocking could be simplified but, of course, it can't be removed. Additionally, since this is how I wrote it originally, it shows how easy it is to introduce accidental complexity when mocking in tests.

Note 2: The magic numbers that I claim have "disappeared" in the functional version are still present in the padding numbers (" " * 16, etc.) This was simply sloppy language on my part; I should've said that the tests and production code together remove the magic from the numbers. The production code contains the details, whereas the tests give a clear high-level view of what's happening.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/time-to-first-request" valid="yes">
    <title>Time to First Request</title>
    <subtitle>This time we build a script that can run a process, wait for it to start listening on a socket, and then kill it. This is useful for benchmarking a web framework's (or even a web app's) startup times. We also compensate for error in the measurements due to cold cache effects and due to constant costs introduced by our script itself. The script is <a href="https://github.com/garybernhardt/destroy-all-software-extras/blob/master/das-0070-time-to-first-request/time_to_first_request.sh">available</a> for download if you'd like to use it.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/conditional-whac-a-mole" valid="yes">
    <title>Conditional Whac-A-Mole</title>
    <subtitle>Removing conditionals can sometimes reduce complexity. However, the benefits aren't so obvious in this example. We'll replace a conditional five times, each using a different language feature, and see how it impacts the understandability of the code.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/test-driving-shell-scripts" valid="yes">
    <title>Test Driving Shell Scripts</title>
    <subtitle>To show that Bash really is a full programming language, let's test-drive a shell script. We'll have all of the familiar tools of xUnit-style testing, like setUp methods and assertions.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/the-mock-obsession-problem" valid="yes">
    <title>The Mock Obsession Problem</title>
    <subtitle>When first learning about mocks and isolation, it's tempting to overdo it, and I did. We'll look at a test that I wrote several years ago and examine its obsession with mocking. As we do that, we'll refactor it to be simpler, more direct, and with less indirection via mocks.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/ugly-tests-trigger-refactoring" valid="yes">
    <title>Ugly Tests Trigger Refactoring</title>
    <subtitle>Paying close attention to your tests can highlight design problems in production code that you might not notice otherwise. Here, we'll look at an example that occurred during the development of <a href="https://github.com/garybernhardt/raptor">Raptor</a>: the tests contain deep stubs, irrelevant names, and a rectangular shape, all of which point to a design problem.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/a-magical-isolation-story" valid="yes">
    <title>A Magical Isolation Story</title>
    <subtitle>Python's namespacing and module system make it possible for a testing tool to enforce test isolation automatically. We'll test drive such a tool from scratch, though it will be a rough implementation (it won't undo its changes after tests complete, for example). This will give us the chance to see several features of Python that are missing in Ruby.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/mutation-in-tell-dont-ask" valid="yes">
    <title>Mutation in Tell Don't Ask</title>
    <subtitle>The Tell Don't Ask principle tells us not to manipulate objects conditionally based on their state; instead, the knowledge of when to manipulate belongs in the object. That sounds like a rule about mutability, but it's not: it applies just as well when objects are immutable. This screencast will look at an example of "asking", convert it to a "tell", and examine the role (or lack thereof) of mutation.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/pretty-git-logs" valid="yes">
    <title>Pretty Git Logs</title>
    <subtitle>In this screencast we'll derive my git log format from scratch. It has one line per commit without sacrificing detail, with each field colorized and aligned in its own column. Along the way, we'll see a couple of command line tricks&mdash;column and various arguments to less&mdash;that haven't appeared in a screencast before. My <a href="https://github.com/garybernhardt/dotfiles/blob/master/.gitconfig">gitconfig</a> and <a href="https://github.com/garybernhardt/dotfiles/blob/master/.githelpers">githelpers</a> files are both available on GitHub.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/three-test-shapes" valid="yes">
    <title>Three Test Shapes</title>
    <subtitle>When keeping tests small, they tend to repeatedly form a few consistent shapes. This time, we'll look at the three main test shapes that are relevant to mutability: immutability, local mutation, and global mutation (or, equivalently: nondestructive, locally destructive, and globally destructive). Then, we'll look at three ways that these tests can go wrong by being implementation-obsessed.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/pushing-complexity-down" valid="yes">
    <title>Pushing Complexity Down</title>
    <subtitle>Pushing complexity down to lower levels is a common refactoring. We'll TDD a small class to work on, then look at two ways to push a particular conditional down to a lower level. One of them will result in a much better design than the other. This shows that merely pushing down isn't enough; sometimes it's simply redrawing the lines in the same bad design.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/the-vimrc" valid="yes">
    <title>The .vimrc</title>
    <subtitle>By popular demand, we'll take a trip through my .vimrc file&mdash;not a line-by-line examination, but a look at the most interesting parts that you might want to steal. We'll see tab key overloading, customizations to ease the rougher edges of Ruby syntax, and my system for running only the tests that are needed. My <a href="https://github.com/garybernhardt/dotfiles/blob/master/.vimrc">vimrc</a> is on GitHub, as is the test running <a href="https://github.com/skalnik/vim-vroom">plugin</a> that was mentioned.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/when-to-generalize-in-tdd" valid="yes">
    <title>When to Generalize in TDD</title>
    <subtitle>When a TDDed test fails, you can often make it pass by "sliming": making the method in question return a hard-coded value, instead of computing the answer in the "right" way. When you write that slimed code, when should you generalize to the full implementation, and when should you write another test to force the generalization? We'll look at some specific cases where I jump straight to the generalization.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/sucks-rocks-8-the-whole-design" valid="yes">
    <title>Sucks/Rocks 8: The Whole Design</title>
    <subtitle>In the last part of the series, we start by elevating the Cucumber features to run against the full stack, not just the service layer. Then, we step back and look at the whole design: how could we have handled NoScore without a sentinel? What was the result of the obsession with avoiding nil? And how would the design allow us to easily switch to a new search engine? The full source of Sucks/Rocks is available <a href="https://github.com/garybernhardt/sucks-rocks">on GitHub</a>.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/sucks-rocks-7-more-cucumber" valid="yes">
    <title>Sucks/Rocks 7: More Cucumber</title>
    <subtitle>In the previous part of this series, we did some exploratory testing to ensure that the controller was working. To avoid repeating that work in the future, we'll automate it by writing more Cucumber tests. In the process, we'll find and fix yet another bug.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/sucks-rocks-6-a-controller" valid="yes">
    <title>Sucks/Rocks 6: a Controller</title>
    <subtitle>Finally, we actually serve the new Sucks/Rocks app as a web site. Using the old static assets, we introduce a tiny controller method to wire our services up to Rails. This ends up revealing a bug, which we write a test for and fix.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/sucks-rocks-5-a-bug-and-a-model" valid="yes">
    <title>Sucks/Rocks 5: a Bug and a Model</title>
    <subtitle>In part five, we go back and fix the bug that we found in part three. Then, we complete the caching layer by pushing down to the ActiveRecord model. Finally, five screencasts in, we introduce a database schema and use Rails.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/sucks-rocks-4-caching" valid="yes">
    <title>Sucks/Rocks 4: Caching</title>
    <subtitle>We now add a caching layer to Sucks/Rocks. It's another plain old Ruby object that uses the RockScore service. This is part four of the series, and in it we add our third class, but we still don't need any Rails at all!</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/sucks-rocks-3-the-search-engine" valid="yes">
    <title>Sucks/Rocks 3: The Search Engine</title>
    <subtitle>In part three, we integrate Bing as our search engine source, using VCR for record/playback of interactions when testing it. We also get our first passing Cucumber scenario, but find a bug in the process.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/sucks-rocks-2-computing-scores" valid="yes">
    <title>Sucks/Rocks 2: Computing Scores</title>
    <subtitle>In part two, we begin the unit-level TDD for Sucks/Rocks. I try to explain every little decision I make as I'm TDDing: how the examples are chosen, when to generalize, and how to force the code to return a specific data type.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/sucks-rocks-1-the-rails-app" valid="yes">
    <title>Sucks/Rocks 1: The Rails App</title>
    <subtitle>In this series, we'll rebuild <a href="http://sucks-rocks.com">Sucks/Rocks</a> from scratch. It's currently broken because the services it relies on have been discontinued. We'll rebuild it as a Rails app using both TDD loops (acceptance and unit), and using many of the design principles discussed in earlier Destroy All Software screencasts.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/shorter-class-syntax" valid="yes">
    <title>Shorter Class Syntax</title>
    <subtitle>How terse can Ruby class definition get without changing the language or impacting readability? We'll give it a shot in this screencast, finding it surprisingly easy to turn six lines of declaration into two. The resulting helper code has been cleaned up, had its monkey patches removed, and is available <a href="https://github.com/garybernhardt/cls">as a gem</a>.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/generating-coupons-with-bash" valid="yes">
    <title>Generating Coupons With Bash</title>
    <subtitle>Destroy All Software coupon codes are each composed of three random Unix words, like "ls ruby fi". We'll build the coupon generation script from scratch. It starts with the list of man pages on the host system and turns them into random three-word coupon codes. Although the script is fundamentally one long chain of pipes, we'll take care to keep the names and structure readable throughout.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/repository-statistics-in-raptor" valid="yes">
    <title>Repository Statistics in Raptor</title>
    <subtitle>We revisit an old topic: computing statistics over a repository. This time, we have a concrete example taken from the Raptor web framework, which has its own statistics script that results in very detailed plots. Along the way, we'll see some shell details, including some confusing behavior from the `time` builtin. Raptor's <a href="https://github.com/garybernhardt/raptor/blob/master/script/statistics">statistics script</a> and the <a href="https://github.com/garybernhardt/dotfiles/blob/master/bin/run-command-on-git-revisions">run-command-on-git-revisions</a> script are both available if you'd like to try them out.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/brittle-and-fragile-tests" valid="yes">
    <title>Brittle and Fragile Tests</title>
    <subtitle>The terms "fragility" and "brittleness" get thrown around: proponents of integration tests claim that mocking is fragile; proponents of mocking claim the opposite. It turns out that they're both right, and we'll look at why, then end with a look at an arguably incorrect use of the terms as applied to truly isolated tests.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/stubbing-unloaded-dependencies" valid="yes">
    <title>Stubbing Unloaded Dependencies</title>
    <subtitle>Writing fast tests often means testing without loading the rest of the application. When you want to stub a method on a class that isn't loaded, how do you do it? There are many ways, and here we'll quickly look at five of them, four of which have no test performance impact at all.

Note: After the publication of this screencast, RSpec gained a "stub_const" function. Using this is generally easier than creating an empty class, but provides similar results. The trade-offs are slightly different from the empty class approach, but almost all of the discussion in this screencast holds.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/emacs-chainsaw-of-chainsaws" valid="yes">
    <title>Emacs, Chainsaw of Chainsaws</title>
    <subtitle>Most Destroy All Software screencasts have used Vim, so let's take a moment to appreciate Emacs, the other "One True Editor". We'll look at some of my customizations from my past as an Emacs user, as well as some of the features I miss when I'm in Vim. (After publication, <a href="http://avdi.org">Avdi Grimm</a> reminded me of <a href="https://gitorious.org/evil/pages/Home">Evil</a>, which you may want to try for better Vim emulation in Emacs.)</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/untested-code-part-4-refactoring-2" valid="yes">
    <title>Untested Code Part 4: Refactoring 2</title>
    <subtitle>In the final part of this series, we pull a large piece of code out of the controller, moving it into its own class. That class is tested in isolation by using the controller's integration tests as a guide. While doing this, one of the tests stands out with very complex stubbing, so we make a small design change to simplify it. Finally, we step back and look at the final suite of tests we've created.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/untested-code-part-3-refactoring-1" valid="yes">
    <title>Untested Code Part 3: Refactoring 1</title>
    <subtitle>Now that we have tests, we can finally refactor! We'll do some minor cleaning on the structure of the controller action, and then extract some model logic into a new model method. During the process, we'll disentangle the book-finding logic from the book-adding logic. This will allow us to extract the book finding logic in part 4, reducing the controller to very little code.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/untested-code-part-2-adding-tests" valid="yes">
    <title>Untested Code Part 2: Adding Tests</title>
    <subtitle>In part 2 of this series, we write actual tests for the context structure we discovered in part 1. Along the way, we'll verify that each test is actually testing something by breaking the code in a very small way to see it fail. (Ideally, this would cause an assertion failure each time; some erroring tests are allowed to slip by for speed's sake.) (The respond_to block used in this screencast could be replaced with `post :foo, :id => 12, :format => :js` in the test, leaving the production code unchanged.)</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/untested-code-part-1-introduction" valid="yes">
    <title>Untested Code Part 1: Introduction</title>
    <subtitle>This is part one of a three part series on dealing with legacy code. We'll start with a completely untested Rails controller, put tests around it that cover all of the cases, and then extract pieces of the code safely using the tests, while simultaneously pushing the tests down to lower levels of isolation. In this screencast, we introduce the code and try to create an exhaustive list of pending RSpec contexts and examples.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/web-apps-when-to-test-in-isolation" valid="yes">
    <title>Web Apps: When to Test in Isolation</title>
    <subtitle>With 40 screencasts in the catalog, many of which discuss testing, we now have enough context to talk about when to test in isolation and when to integrate. It comes down to one main question: who owns the interfaces you depend on? We'll go through the major components in a modern web app, looking at why each can be tested in isolation or not, or why they're somewhere between those extremes, and see exactly why each one falls where it does.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/outside-in-tdd-stubs-vs-stash" valid="yes">
    <title>Outside-in TDD: Stubs vs. Stash</title>
    <subtitle>When doing TDD from the outside in, stubs are the norm. We build the outermost class first, stubbing its dependencies, which may not even exist yet. Those stubs tell us what the interface of the next layer down should be; this is how TDD drives the design. When we're not isolating our tests, though, we can't do this. We can still start at the top, but we can't make the test pass without an actual implementation. This can lead to large, unwieldy commits. We'll look at how to avoid those large commits using the git stash, and then compare the results of outside-in TDD with stubs vs. outside-in TDD with the stash.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/tdding-spikes-away-with-rebase" valid="yes">
    <title>TDDing Spikes Away With Rebase</title>
    <subtitle>A spike is a small, disposable experiment in code. After learning about your problem or solution via the spike, you throw it away and rewrite the code using TDD. Here, we look at a process for doing that incrementally, using the spike as a guide and git's rebase functionality as the means. It's only appropriate when the spiked code is heavily constrained by external interfaces. But, in that case, it can guide you through tricky third party interactions. Because this is a subtle topic, the screencast is necessarily demonstrating a simplified form: the spiked and TDDed code are identical. When doing this in practice, the rebases will result in nontrivial conflicts since the implementations won't be identical. The more dissimilar they are, the closer you are to true test driven design, and the less useful this technique is.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/what-goes-in-active-records-part-2" valid="yes">
    <title>What Goes in Active Records Part 2</title>
    <subtitle>In the second part of this series, we'll actually remove the various parts of the model that don't belong, as shown in part 1. Most of the time will be spent removing the two ActiveRecord callbacks, replacing them with a class that sits between the controller and model, mediating the lifecycle of the User and Braintree API objects. We'll also briefly replace the other two methods with implementations outside the ActiveRecord class.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/what-goes-in-active-records" valid="yes">
    <title>What Goes in Active Records</title>
    <subtitle>Several screencasts have talked about moving logic out of models and into naked Ruby classes, but what are the things we should leave behind in the ActiveRecord models? That's what we'll address here: a walk through the types of methods that belong on AR models, and why they belong there.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/which-tests-to-write" valid="yes">
    <title>Which Tests to Write</title>
    <subtitle>In this screencast, we'll revisit the example from "<a href="/screencasts/catalog/performance-of-different-test-sizes">Performance of Different Test Sizes</a>", where we wrote similar pairs of tests at four different levels. We'll go through each level, asking which of those tests we should keep, and why. Along the way, we'll compare the layers a test claims to test to the layers it actually interacts with, and see how that indicates the quality of the test.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/splitting-into-fine-grained-tests" valid="yes">
    <title>Splitting Into Fine Grained Tests</title>
    <subtitle>The bigger a single test is, the worse the feedback. We'll look at a test that's already good but can be split further, comparing the failure patterns it generates before and after splitting. By turning one test into three, we'll be able to understand the failures simply by looking at the test names, instead of having to analyze the actual assertion failures.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/simple-bash-script-testing" valid="yes">
    <title>Simple Bash Script Testing</title>
    <subtitle>Cowboying a shell script is fun, but how do we test it? We'll look at a basic method in this screencast, using nothing except standard shell tools. In the process, we'll also see a simple method for using a git repository as a fixture for testing a tool that operates on it. Everything shown is Bash-compatible, though the screencast is a mix of Bash and Zsh.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/performance-of-different-test-sizes" valid="yes">
    <title>Performance of Different Test Sizes</title>
    <subtitle>This time, we analyze the execution time of testing a small piece of behavior at four different levels: from Cucumber, from the controller, from the view, and from an isolated Ruby class. This lets us quantify the performance benefit of fine-grained testing and make more objective decisions about what should be tested at a given level.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/history-spelunking-with-unix" valid="yes">
    <title>History Spelunking With Unix</title>
    <subtitle>In this screencast, we once again analyze the history of a git repository. This time, though, we go further: we first generate a chart showing test runtimes across revisions, using only the command line. We then focus on a sudden change in runtime that the chart reveals, repurposing git bisect to make git find the commit that caused the change automatically.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/some-vim-tips" valid="yes">
    <title>Some Vim Tips</title>
    <subtitle>This is a screencast full of tips for learning Vim. They range from introductory (how do I learn to use Vim effectively?) to specific questions I'm asked (what plugins do I use?) to advanced (how should I guide my use of plugins to maximize speed?), so there should be something for everyone. We also touch on color schemes a couple times: both my grb256 color scheme based on ir_black (available <a href="https://github.com/garybernhardt/destroy-all-software-extras/tree/master/das-0030-some-vim-tips">on GitHub</a>) and <a href="http://ethanschoonover.com/solarized">Solarized</a>, which is a more subtle choice.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/extracting-from-models" valid="yes">
    <title>Extracting From Models</title>
    <subtitle>Returning to ChiliProject, we now extract some application logic from a model into some naked classes in lib. This results in removing code from the model, centralizing knowledge, eliminating duplication, and providing points for reuse. A method marked "This method [...] is to be kept as is" even gets refactored. (I'm sure it'll be fine!)</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/acceptance-tests" valid="yes">
    <title>Acceptance Tests</title>
    <subtitle>In this screencast, we look at Cucumber for writing high-level acceptance tests with examples taken from Destroy All Software's Cucumber suite. We'll touch on step naming and the abstract/detailed split between features and steps, as well as some performance tips and notes about browser engines.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/extracting-from-controller-to-model" valid="yes">
    <title>Extracting From Controller to Model</title>
    <subtitle>This is a follow-up to the two-part controller refactoring series. Here, we step outside the controller itself, moving small pieces of model querying and logic into the model. This clarifies the controller's intent, provides points for reuse, and simplifies testing of both the controller and the newly-extracted model methods.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/controller-refactoring-demo-part-2" valid="yes">
    <title>Controller Refactoring Demo Part 2</title>
    <subtitle>This is the second of two screencasts showing a live refactoring of a large controller method. In this half, we try to improve the names of all of our new methods by at least a little bit. Along the way, we extract some more fine-grained methods. At the end, we look at how breaking this class down makes it easy to move code out of the controller and into other classes. The final version of the class is <a href="https://github.com/garybernhardt/destroy-all-software-extras/blob/master/das-0026-controller-refactoring-demo-part-2/account_controller.rb">available</a> if you'd like to look over it.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/controller-refactoring-demo-part-1" valid="yes">
    <title>Controller Refactoring Demo Part 1</title>
    <subtitle>This is the first of two screencasts showing a live refactoring of a large controller method. In the first half, we break the method down into smaller methods so that we can understand it better. In the second half, we'll reason about those small pieces, find good names for them, and clarify. The end goals are small, understandable methods; better names; reduced early returns, conditionals, and other control structures; and better clarification of intent.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/notes-on-stubbing" valid="yes">
    <title>Notes on Stubbing</title>
    <subtitle>In this screencast we look just at stubbing: not other types of test doubles, and not when to use a test double or not. We'll hit three stub-specific topics: the difference between incidental and essential interactions; a method for testing mix-ins without depending on a class that mixes them in; and creating more focused test examples by pulling out common stubs and mutating them for each example.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/spiking-and-continuous-spiking" valid="yes">
    <title>Spiking and Continuous Spiking</title>
    <subtitle>To spike code, you stop doing TDD, throw some code together without tests to learn something, then delete the code and do it the right way. In this screencast, we'll look at spiking, and specifically the idea of "continuous spiking": instead of throwing the code away, transitioning it into TDDed production code iteratively. It's a dangerous practice, but doing it with care can help you through unclear parts of your application development.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/test-isolation-and-refactoring" valid="yes">
    <title>Test Isolation and Refactoring</title>
    <subtitle>Isolated unit tests have many benefits, but one drawback is a loss of confidence in the integrated system. At high levels of isolation, you lack a feedback mechanism for learning that the pieces don't actually work together: for example, they call the wrong methods, or call them with the wrong number of arguments. This can make refactoring with isolated tests scary. In this screencast, we'll look at the technique I use as a first line of defense. It's a hybrid between fully isolated unit testing and slow, expensive integration testing.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/coupling-and-abstraction" valid="yes">
    <title>Coupling and Abstraction</title>
    <subtitle>Coupling and abstraction are closely related: by introducing a method to reify an abstraction, you often naturally decrease the coupling between two classes. We'll explore this with a simple model/controller example, as well as a case where the situation isn't so straightforward: a method call that looks like a good, simple abstraction but is actually dangerous because it's third-party. Finally, we'll see the way that isolated, outside-in TDD naturally encourages you to build good abstractions early.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/tar-fork-and-the-tar-pipe" valid="yes">
    <title>Tar, Fork, and the Tar Pipe</title>
    <subtitle>The tar pipe is my favorite Unix command. It combines several important Unix concepts around files, subprocesses, and interprocess communication in just a few characters. We'll look at the tar pipe and what it does, then dive down into what the shell is doing to make it work, including forking and the creation of pipes. We'll also look at some raw tar data, which you rarely see in the wild.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/composing-a-unix-command-line" valid="yes">
    <title>Composing a Unix Command Line</title>
    <subtitle>We've seen a lot of Unix commands, but never stopped to talk specifically about building a large command. That's what we'll do in this screencast: we'll solve a problem using a large one-off command, but the goal is to think about the command itself. Along the way, we'll see most of the utilities you need to do text processing in Unix. If you learn each of these, you'll be able to manipulate text streams quite well.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/clarity-via-isolated-tests" valid="yes">
    <title>Clarity via Isolated Tests</title>
    <subtitle>Several Destroy All Software screencasts have touched on isolated testing, but never addressed it directly for its own sake. That's the topic of this screencast: why do we care about isolating the class under test from other classes? We'll look at an actual example I ran into: I started TDDing a class, letting it integrate with some other simple classes. After realizing that the test was becoming a mess, I deleted it, rewrote it in an isolated way, and it was far more readable. We'll retrace those steps to see the dramatic difference isolated testing made.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/wrapping-third-party-apis" valid="yes">
    <title>Wrapping Third Party APIs</title>
    <subtitle>Third party APIs can be a source of bad design in your applications. When you mix your application's logic with calls into an API, you're obscuring both responsibilities. In this screencast, we'll look at a class where I've done just that. Then we'll extract the API access out into a wrapper, simplifying the original class and adding clarity to both the production code and the tests.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/a-refactoring-story" valid="yes">
    <title>A Refactoring Story</title>
    <subtitle>While thinking about this week's screencast, I happened to do a pretty big refactoring on one of Destroy All Software's controllers. I translated a confusing mess of exception rescuing into a more sensible action, pushing validation and special cases down into lower-level classes. We'll look at the before picture, the changes I made, and then the after picture. There are naming changes, structural changes, and some important Rails behavior that was a little surprising. This is a departure from the normal Destroy All Software style, so please <a href="mailto:support@destroyallsoftware.com">let us know what you think</a>!</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/quick-and-easy-perf-tests" valid="yes">
    <title>Quick and Easy Perf Tests</title>
    <subtitle>Writing traditional arrange-act-assert tests for performance is difficult: what do you assert on? In this screencast, we'll look at a simple method for doing ongoing performance analysis: running small benchmarks across the commits in version control. With a few lines of shell scripts and RSpec, we can get a visual sense of our system's performance over time, allowing us to catch performance problems before they make it to production. The run-command-on-git-revisions script used here is available <a href="https://github.com/garybernhardt/dotfiles/blob/master/bin/run-command-on-git-revisions">on GitHub</a>.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/extracting-objects-in-django" valid="yes">
    <title>Extracting Objects in Django</title>
    <subtitle>This is a sequel to the original <a href="/screencasts/catalog/extracting-domain-objects">Extracting Domain Objects</a> screencast. Once again, we'll look at Destroy All Software's catalog logic, pulling it out of the Django View (equivalent to a Rails controller), and moving it into its own domain-relevant class. Then we'll isolate and simplify the tests for both the view and the new Catalog class. Finally, we'll use the newly isolated tests to extend the behavior of the Catalog without changing or breaking the view and its tests.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/file-navigation-in-vim" valid="yes">
    <title>File Navigation in Vim</title>
    <subtitle>Vim's file navigation features are weak, so customization can speed you up a lot. We'll cover finding and opening files, including Rails-specific hacks that help you avoid the problem of finding one file among many with similar names. Then we'll look at the customizations I use to make splitting and window management work well with an outside-in TDD workflow. To try these customizations yourself, see the <a href="/file-navigation-in-vim.html">reference</a> that accompanies this screencast.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/packaging-in-ruby-and-python" valid="yes">
    <title>Packaging in Ruby and Python</title>
    <subtitle>There's been an explosion in packaging tools in the last few years. Creation and installation aren't the hard parts any more: now we're managing multiple runtime versions, isolating package sets for different applications, and specifying dependencies in a repeatable way. This screencast looks at the landscapes in Ruby and Python, comparing their solutions to these problems. A <a href="/ruby-vs-python-packaging-comparison.html">companion table</a> of packaging-related commands in Ruby and Python is available to review the tools used in this screencast.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/git-workflow" valid="yes">
    <title>Git Workflow</title>
    <subtitle>Many people have asked about my git workflow, so here it is. Almost every command I run is an alias of some kind, but I explain them all. We'll go through a cycle of hacking, retroactively splitting commits, running tests over them, fetching from origin, rebasing over it, running tests over the new commits, and finally pushing. You can download <a href="https://github.com/garybernhardt/dotfiles/blob/master/.gitconfig">my .gitconfig</a> and the <a href="https://github.com/garybernhardt/dotfiles/blob/master/bin/run-command-on-git-revisions">run-command-on-git-revisions script</a> to use them yourself. Also see the <a href="/screencasts/catalog/source-code-history-integrity">"Source Code History Integrity"</a> screencast for more on that topic.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/fast-tests-with-and-without-rails" valid="yes">
    <title>Fast Tests With and Without Rails</title>
    <subtitle>Rails' startup can make running tests, and especially doing TDD, painful. You can escape this for most tests by moving code into the lib directory and testing it outside of Rails. We'll look at the performance of tests with and without Rails, as well as how I configure my environment to automatically skip loading it when possible. The script mentioned at the end of this screencast is <a href="https://github.com/garybernhardt/destroy-all-software-extras/blob/master/das-0010-fast-tests-with-and-without-rails/test">available for download</a>.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/exceptions-and-control-flow" valid="yes">
    <title>Exceptions and Control Flow</title>
    <subtitle>The mantra "don't use exceptions for control flow" is repeated often, but its real implications tend to be glossed over. Using an example, I'll show you exactly what I think about it, and why I'm OK with using exceptions in certain cases that some people would dismiss as control flow.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/processes-and-jobs" valid="yes">
    <title>Processes and Jobs</title>
    <subtitle>Processes and jobs&mdash;processes running in your shell&mdash;are the core of a Unix development workflow. For the first half of this screencast, we'll look at the shell's job control mechanisms, how GNU Screen works, and how I use them together for my development workflow. Then we'll look at an advanced use of process management that enables powerful composition of Unix tools beyond simple piping.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/growing-a-test-suite" valid="yes">
    <title>Growing a Test Suite</title>
    <subtitle>When building a system, it's easy to continually append tests to your suite without considering the relationship between them. By building your test suite more deliberately, you can write clearer, terser tests, and also put better design pressure on your system. We'll build a small piece of code both ways: first by simply appending tests, then by extending existing tests and thinking about the implications.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/conflicting-principles" valid="yes">
    <title>Conflicting Principles</title>
    <subtitle>There are a whole lot of object oriented design principles. It's tempting to view them as absolute rules, but we also need to understand the trade-offs between them. We'll look at a case where The Single Responsibility Principle and Tell Don't Ask conflict, then touch on the grander implications.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/extracting-domain-objects" valid="yes">
    <title>Extracting Domain Objects</title>
    <subtitle>In modern web frameworks, it's easy to extend model and controller objects over and over again, leaving you with huge, unwieldy objects. To avoid this, you can extract small pieces into their own classes. This has many benefits, such as: much faster test execution, naming concepts in the system that were previously implicit, and adding explicit abstraction layers. We'll look at an example from Destroy All Software itself, a Rails app, and pull a piece of model logic embedded in a controller out into its own class with isolated tests.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/source-code-history-integrity" valid="yes">
    <title>Source Code History Integrity</title>
    <subtitle>Source code history is a touchy topic. Should we ever edit history? Is it safe? We'll look at what can go wrong when editing history, and how to avoid the potential problems. We'll also briefly talk about the Mercurial and Git communities. Warning: there's some editorializing!</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/building-rspec-from-scratch" valid="yes">
    <title>Building RSpec From Scratch</title>
    <subtitle>RSpec has lured many programmers into the Ruby world with its beautiful syntax. For some, its workings remain a mystery. Let's dispel that. We'll build some of RSpec's basic syntax from scratch, test driving it using Test::Unit. This is done in Ruby, of course. Basic Ruby knowledge will definitely help.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/how-and-why-to-avoid-nil" valid="yes">
    <title>How and Why to Avoid Nil</title>
    <subtitle>We'll look at nil from many angles. Why do nils show up in your programs? What kinds of problems can they cause when they get out of control? How can we design our systems to fail loudly when unexpected nils exist and, more importantly, to avoid the introduction of nils entirely? This screencast uses Ruby, but the techniques apply in any language.</subtitle>
  </item>
  <item uid="topic" arg="https://www.destroyallsoftware.com/screencasts/catalog/statistics-over-git-repositories" valid="yes">
    <title>Statistics Over Git Repositories</title>
    <subtitle>We'll use the shell and the git command line tools to iterate over revisions, computing a statistic for each revision. Initially, it'll be a one-liner at the prompt. Then we'll promote it to a full script, refactor it, and add some more features.</subtitle>
  </item>
</items>

 

but Alfred wouldn't read it in, and without any error messages I can't tell what is broken.

 

I added it as an input script filter (/bin/bash) with "env ruby rss_transformer.rb | xmllint --format -", but nothing happened when I activated the keyword.

 

It would be nice to get errors. It might be invalid characters that need escaping for Alfred, but I can't tell.
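
(For reference: a variant of that one-liner that at least keeps warnings on stderr out of what Alfred parses would be something like

env ruby rss_transformer.rb 2>/dev/null | xmllint --format - 2>/dev/null

though that just hides errors rather than surfacing them.)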


Not sure what it is, because I don't have the rest of your code and/or workflow to look at, but the issue isn't your XML.

 

Typically, to test stuff like this, I would create a "Sample" workflow, put a script filter input into it, and in the code area enter:

cat << 'CODE'
<copy/paste your xml here>
CODE

(Quoting the CODE delimiter keeps the shell from expanding any $ or backticks that happen to be in your XML.)

 

Then run that workflow. Pasting your XML from above into it, all of your results are returned. So I would guess that either you have some other error that you aren't seeing, which is getting printed before or after the XML, or... something. I'm not sure, but the XML itself is correct.

  • 1 month later...

I think I have the same problem. The XML returned is fine, but it won't show any of the results in Alfred; instead it shows the fallback searches... Is this a bug in Alfred?

If it helps: Alfred stops showing the results after a certain number of characters after the keyword. Maybe the results aren't offered fast enough (which I doubt) and Alfred rejects them?



Alfred doesn't stop working after a specific number of characters or if you type too fast. Perhaps something about the input is causing the script to fail? Try running the code from the command line, if available, and see if it outputs valid XML there.
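
For example, with the Ruby script from the first post, that check could look something like this (xmllint's --noout prints nothing when the XML is well-formed, and reports the first parse error otherwise):

env ruby rss_transformer.rb | xmllint --noout -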


Usually when Alfred doesn't show results from some XML, there is a gremlin hidden in the XML which is preventing the parsing from happening.

 

I will add some more debugging options into Alfred for workflows in the future which will help identify what these issues could be :)


How can I run the script from the command line? I've already tried writing the XML to a cache file and it's valid there.

 

 


Another good thing to note: if there is ANYTHING else output during the process other than the XML, it breaks Alfred's ability to parse the XML, because he thinks the other data is part of it.

 

An example of this: in my PHP, if I use the date() function without first setting the timezone, it produces warnings. Those would still be picked up by Alfred, and he would attempt to parse them. Another example would be debug statements that you might have forgotten in your code.
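
A quick way to spot that kind of stray output is to throw stdout away and see what's left, e.g. with the Ruby script from the first post (substitute your own command):

env ruby rss_transformer.rb 2>&1 >/dev/null   # anything printed here is noise alongside the XML

Some languages print warnings to stdout instead, in which case piping stdout through xmllint, as suggested above, will catch them.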


Ok, so.. there is an error in your code. 

 

1. Double click your script filter

2. Copy all code

3. Click the "Open Workflow Folder" button.

4. Create a new test.php file in there.

5. Paste your code into it.

6. Change line 4 to be $orig = $argv[1];

7. Open a terminal at that location.

8. Run: php -f test.php -- "list of distr"

 

You'll see you are getting an error:

Warning: Invalid argument supplied for foreach() in /Users/ferg/Dropbox/Application Support/Alfred 2/Alfred.alfredpreferences/workflows/user.workflow.446735CD-18D1-4F0A-A618-5309680C2656/test.php on line 11

 

That error being shown WITH the XML is what's killing it: Alfred is trying to parse the error message too.

  • 7 months later...

For anyone looking for another quick way to test results, create a new script filter workflow called "pastetest" or something and have it simply run "pbpaste" (as a bash script).

 

Then you can copy your XML to the clipboard and quickly see if it is parsed by Alfred by running this workflow.
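
For example (again assuming the Ruby script from the first post):

env ruby rss_transformer.rb | pbcopy

then type the pastetest keyword into Alfred to see whether the copied XML produces results.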

  • 3 weeks later...

Usually when Alfred doesn't show results from some XML, there is a gremlin hidden in the XML which is preventing the parsing from happening.

 

I will add some more debugging options into Alfred for workflows in the future which will help identify what these issues could be :)

 

Any news on this front?

 

Trying to figure out what's going wrong is a real PITA because Alfred gives you zero feedback. It just doesn't work. Surely some kind of log file with errors explaining why a workflow didn't run, or why its results were rejected, must be possible.

 

It would be a massive help to us workflow authors.



No timescales, but there is still a large number of workflow improvements in the works.

