u/Dan-F9

What’s the Most "Un-Hateable" Motorcycle of All Time?
▲ 163 r/Fortnine+1 crossposts

This week, it's a community post, though the topic does tie into rider psychology in a way I'll outline briefly.

Forums are littered with "most hated" topics, and it's quite engaging to talk about the things we all commonly dislike, since the effect this produces seems to draw us closer rather than divide us. Out of curiosity, I thought: Would the opposite do the same?

I have a feeling that it's easier to dislike a certain machine based on its obvious flaws, but can we, as a community, come to some consensus about which motorcycle is flat-out impossible to talk ill of? Is there such a machine, one that transcends style and time, with little to no haters?

While there might not be just one right answer, I do think that the qualities tied to such a classification will bring out universal traits we all value as motorcycle enthusiasts. It's common ground to just talk about motorcycling in the purest sense, without all the stylistic add-ons that often sway our preferences in one direction or the other.

Terrain is also a factor, so for the sake of establishing some kind of specificity in this post, let's focus on road bikes. They could be cruisers, tourers, sportbikes, streetfighters, classics, do-it-alls, and everything in between.

As a tentative discussion starter, I have a feeling that the essential characteristics of a universally loved machine are, but aren't limited to:

  1. Reliability
  2. Accessibility (includes price tag)
  3. Ease of use
  4. Maintenance
  5. Fun (quite the subjective point, yet still important)

Considering this, my personal "un-hateables" are, in no particular order:

  • Yamaha MT-07 or Tenere 700 (or just the CP2 engine) - it's tried and true, built to last, powerful enough, fun, and relatively inexpensive; although it's fairly new, it has amassed a ton of street cred since its inception.
  • Honda Super Cub (idk if this would be considered un-hateable for most, but it's definitely proven just how reliable it can be, blowing much, if not all, of the competition out of the water) - If I were stranded on some huge island and needed to get around for as long as possible (assuming there are gas stations and some small supply of oil), I'd pick the Super Cub 9 times out of 10.
  • Suzuki V-Strom 650 (comfortable, bulletproof engine, can do literally anything, good 2-up option, easily modifiable to address suspension concerns) - Overall, the near-perfect blank canvas to etch your riding journey onto.

Being "un-hateable" is like being ordinary... forgettable, even. But we might underestimate just how important forgetting about the bike actually is. Because everything else is what we care about: the journey, the miles, the people we meet, and wherever the invisible machine can get us to without too much fuss.

It's a riding philosophy that might not resonate with everyone, because it's not the coolest, nor the flashiest, but I don't think that makes it in any way insufficient.

It's motorcycling, stripped down to the bare essentials. Everything else is just life.

u/Dan-F9 — 3 days ago

Posted this on r/Fortnine recently, and it was suggested to me to reach out to some of our larger communities for more feedback, so here I am!

Context: I am in the process of creating a motorcycle helmet testing method at FortNine, where we stack up popular models against each other (in a given category) and provide data on them, eventually scoring them on a /10 scale.

It's currently v.1.0, so it doesn't get any newer than that. Critique it, roast it, the more the better. The goal is to perfect this process as much as I can, so that the results best correspond to what you all actually care about.

The method can be found on our website (link below), but I'll also paste it below!
https://fortnine.ca/en/how-we-rate-motorcycle-helmets

Thank you in advance for taking the time out of your day to read, comment and critique, it goes a long way!

-

Testing Objectives

To make the helmet shopping experience as informative and easy as possible, highlighting the key elements and differences that make for an excellent helmet for every application and budget.

We do this by publishing:

  • Clear scoring criteria;
  • A standardized test & review for every individual helmet we select;
  • Comparative data showing which models perform best;
  • A complementary, more subjective hands-on commentary, based on real time spent wearing the helmet.

What We Do

We verify and measure the things riders actually care about:

  • Certifications (as labeled on the helmet), not marketing claims;
  • Helmet data (materials, liner and comfort info, included items like Pinlock where applicable);
  • Performance metrics we can test without destroying helmets (field of view, ventilation performance, noise, weight, fog resistance when possible, retention system performance, modular chinbar and latch design intent where relevant);
  • Structured subjectivity for comfort and usability (real people wearing helmets).

What We Don't Do

We do not replicate certification impact attenuation testing in-house, because meaningful impact testing is inherently destructive. Instead, we lean on recognized certifications for that aspect and focus our lab efforts on the measurable performance factors you experience every ride.

How We Keep Reviews Unbiased

No brand preferences here; we purchase the helmets we test from our suppliers and run our series of non-destructive tests in the same way, every time (the published methodology version is always noted at the beginning of each review).

The price of a given helmet has no impact on its score, but it can affect our "value" assessment when another helmet offers the same features and performance at a lower price.

Finally, there is always one primary tester per helmet, plus re-testing if a result looks unusual compared to similar models. A size L (58 cm) head is used for additional comfort tests. Our testers vary by head shape (round, intermediate oval, long oval), but are always within the 57-58 cm (typical size L) range.

F9 Helmet Score (0-10), Explained

This is the single numeric rating for each helmet. It represents performance in our standardized test categories, weighted exactly as described after this section.

Where "Value" Fits

We do not publish a separate Value Score. Instead, we provide value context as a comparative tool. This includes:

  • A pricing context at time of review (when possible);
  • A "Value note" in the Pros/Cons section (example: "Premium price, premium ventilation and optics" or "Costs more than its noise performance justifies");
  • A "Best for…" section, with use-cases that naturally communicate who should buy it (and who shouldn’t).

This keeps the score focused on performance, while still giving our shoppers the nuance they are looking for.

How the Score Is Calculated

Step 1: We score each category from 0–10

Each category gets a 0–10 score based on:

  • Measured data where possible (degrees, grams, millimeters, dBA, etc.)
  • Rubric-based evaluation where measurement isn’t practical (comfort and usability, build quality checklists, etc.)

Step 2: We apply published weights

Baseline weighting (v1.0):

  • Protection: 25%
  • Fit & stability: 20%
  • Vision/optics/fog: 15%
  • Ventilation: 15%
  • Noise: 10%
  • Comfort liner/interior: 10%
  • Build/sealing/durability: 5%

F9 Helmet Score = weighted average of category scores.

Step 3: "Not applicable" handling by helmet type

Not every helmet type is built to win the same race. Some metrics don’t apply to certain helmet types (example: some aspects of vision and noise expectations differ for open-face helmets).

When a category is not applicable:

  • It is marked N/A;
  • Its weight is redistributed proportionally across the remaining applicable categories for that helmet type;
  • The adjusted weighting is stated on the review or category page, so the math is never hidden.
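As a sketch, the weighting and N/A redistribution described above reduce to a short function. The category keys and sample scores below are illustrative only, not actual test results:

```python
# Baseline v1.0 weights from the methodology above.
BASELINE_WEIGHTS = {
    "protection": 0.25,
    "fit_stability": 0.20,
    "vision_optics_fog": 0.15,
    "ventilation": 0.15,
    "noise": 0.10,
    "comfort_interior": 0.10,
    "build_durability": 0.05,
}

def f9_score(category_scores):
    """Weighted average of 0-10 category scores.

    Categories scored None are treated as N/A; dividing by the summed
    weight of the applicable categories is equivalent to redistributing
    the N/A weight proportionally across them.
    """
    applicable = {k: v for k, v in category_scores.items() if v is not None}
    total_weight = sum(BASELINE_WEIGHTS[k] for k in applicable)
    return sum(BASELINE_WEIGHTS[k] * v for k, v in applicable.items()) / total_weight

# Example: an open-face helmet with vision, ventilation and noise marked N/A.
scores = {
    "protection": 8.0,
    "fit_stability": 7.5,
    "vision_optics_fog": None,
    "ventilation": None,
    "noise": None,
    "comfort_interior": 9.0,
    "build_durability": 6.0,
}
print(round(f9_score(scores), 2))  # → 7.83
```

With three categories N/A, the remaining 60% of weight is scaled back up to 100%, so the math stays visible and reproducible.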

What We Test

1) Fit and Head-Shape Compatibility

Goal: assess comfort and fit beyond a basic size chart.

First, we record the objective fit mapping as stated by the manufacturer. We then test it with our human fit panel, corresponding to the head shape and size tested.

Wear Protocol: 10 min break-in, followed by a 20 min wear test. With the helmet on, our model notes pressure points, comfort details, and other complementary information, such as whether the helmet is glasses-friendly and whether its mechanisms are easy to operate (for example: ventilation tabs, opening and closing of the visor, buckle accessibility and ease of use).

Rubric scoring (0–10) for:

  • Forehead/temple/jaw pressure
  • Stability under movement (standardized shake routine)

2) Protection

This step is more of a verification of the certifications present on the back of each helmet. We note the exact sticker and date of certification (when applicable), as well as any additional certifications that the helmet has passed.

Extra features such as emergency-release cheek pads, inflatable cheek pads and rotational management are also noted.

3) Vision, Optics and Fog

Our goal is to measure what can actually be seen, and how well the visor stays clear. In this test, we include:

  • Field of view (FOV): horizontal & vertical;
  • Fog resistance: time to fog, measured by placing a humidifier inside the helmet in a controlled environment where the visor starts out cold (simulating real-world conditions). When a visor has been treated with an anti-fog coating, we note the result as N/A and state why;
  • Stated features, such as whether the visor is Pinlock-ready and whether a Pinlock insert is included.

4) Ventilation

We note and list vent positions, along with the number of ventilation intakes and exhaust channels. Ease of operation is also mentioned. It goes without saying, but this section (along with vision and noise) is marked as N/A for open-face helmets.

5) Noise

Goal: to quantify interior noise as consistently as possible, providing comparative data across all full face and modular helmet models.

We do this by placing a microphone inside the helmet and using a leaf blower at a distance of approximately 3 feet. We then record the dB measurement with vents open and vents closed, 3 times per vent configuration. The average of the 3 readings is used as the final result, giving one value per vent configuration, for a total of 2 dB readings.

The results are then displayed next to similar helmets in the same category, showing how well the tested helmet performs in comparison.
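A small sketch of the averaging step (the readings below are hypothetical, not measured results). One subtlety worth flagging: decibels are logarithmic, so a plain arithmetic mean of three close readings is a reasonable approximation, while averaging in the linear power domain is the acoustically strict alternative:

```python
import math

def mean_db(readings, power_domain=False):
    """Average repeated dB readings for one vent configuration.

    power_domain=False matches the simple "average of 3" described above.
    power_domain=True converts to linear power before averaging, the strict
    way to combine logarithmic levels; the two agree closely when the
    readings are near each other.
    """
    if power_domain:
        return 10 * math.log10(sum(10 ** (r / 10) for r in readings) / len(readings))
    return sum(readings) / len(readings)

# Hypothetical readings (dB) from three runs per vent configuration.
vents_open = [98.2, 97.9, 98.5]
vents_closed = [95.1, 95.4, 94.8]
print(round(mean_db(vents_open), 1), round(mean_db(vents_closed), 1))  # → 98.2 95.1
```

For repeat runs of the same test the spread should be small, so either averaging method yields essentially the same comparative result.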

6) Interior (Comfort Liner)

Liner thickness is measured in millimeters, and we note any tools required to remove components. We also comment on comfort relative to liner thickness, as well as glasses compatibility.

7) Build Quality

Goal: to identify potential failure points and real-world durability concerns. We examine things like seals, visor mechanism, shell finish, hardware quality and EPS finish quality.

8) What's In the Box

This section is primarily additional shopping information. We document exactly what you receive: included items and extras. If there's a discrepancy between what the manufacturer says and what we've got, we blind-check a second box and confirm the facts.

9) Our Take and Final Score

Additional notes, and a more subjective commentary based on our experience as reviewers. Finally, an F9 Score is attributed to the tested helmet.

Bias Controls, Retests, and Methodology Updates

If something looks off (too good or too bad compared to similar helmets), we re-run the relevant tests. We also maintain a methodology change log so future updates (v1.1, v1.2…) are transparent, and older helmets can be re-tested when necessary for fair comparisons.

u/Dan-F9 — 9 days ago

Context: I’ve been writing these weekly topics for some time now, and I send them out as part of a newsletter called The Break-In. I was today years old when I realized I hadn’t yet written a topic pertaining to the meaning behind the newsletter’s title.

Not that anybody asked, but here it finally is.

I’ve always found the break-in phase of a new motorcycle quite poetic. Because the recommended “easing-in” concerns not only the machine, but the rider themselves.

I was a new rider rushing to the Yamaha dealership once, driven by a blind eye toward my wallet and pure enthusiasm for the sport. At the lot, I was told: keep the revs reasonable, vary the load, and don’t abuse the thing straight out of the dealership. But a new bike does something to the mind. It fills it with promise, eagerness, and a willingness to explore its limits.

Naturally, a part of me wanted access to all of it now, not eventually. Maybe this created a kind of temptation to open the throttle too soon, to convince myself that familiarity meant pushing something to the limit. And to make matters worse, I then stumbled upon a few articles talking about the benefits of a “hard” break-in, and how this is exactly the kind of method you "should" be using.

Whether that is or isn’t the case isn’t really the point of this article, because I would hesitate to recommend cozying up to one’s limits, especially when you’re a beginner. As a newbie, there are just too many variables to consider, not to mention the much less flattering question: why the hell are you purchasing a brand new FZ-07 in the first place?

We often create this image of the kind of rider we want to be way before we set foot in the dealership. So much so that we become easy prey for salesmen, because their marketing machine has already accomplished its task, and they’re just there to validate and close. But this doesn’t have to be the case.

The whole affair of purchasing a new bike as a beginner often reveals a kind of debilitating excitement brewing under the surface. And that excitement matters, because desire has this way of clouding judgment while convincing us that we are thinking clearly. You can feel yourself chasing something at the end of some rainbow, without having gone through the motions that would actually equip you to handle it.

That’s why I think the first break-in should concern you, your mindset, and your patience, rather than the bike itself. It’s what actually determines the rider you’re going to be, not the one you’ve dreamt of being.

A good break-in period teaches self-mastery, a kind of restraint that allows you to examine why you want to pursue the sport, and what you really need to get started (not what makes you look the coolest). If the machine has a recommended process to respect, so should you. That means setting a standard for the limits of your comfort, by building a relationship with a bike that is, at first, inaccessible to you.

Because a new bike is uncharted territory. It doesn’t become properly yours the moment you buy it. First, it has to become legible to you, which means giving it the time to teach you how it wants to be ridden. There’s no need to force your own enthusiastic expectations onto it. Those are often the first things that need wearing down.

Maybe that’s what The Break-In has meant all along. Not just the careful wearing-in of a machine, but the slow correction of the fantasies that make us rush toward things we haven’t yet learned to understand.

Before the bike becomes yours in any meaningful sense, your desires have to stop getting in the way.

u/Dan-F9 — 10 days ago
▲ 24 r/Fortnine+1 crossposts

What, 2 posts in 1 week? Perhaps I'm getting ahead of myself, but I am in the process of creating a moto helmet testing method at FortNine, where we stack up popular models against each other (in a given category) and provide data on them, eventually scoring them on a /10 scale.

I wanted to run this by our community and get your feedback on the methodology. It's currently v.1.0, so it doesn't get any newer than that. Critique it, roast it, the more the better. The goal is to perfect this process as much as I can, so that the results best correspond to what you all actually care about.

The method can be found on our website:
https://fortnine.ca/en/how-we-rate-motorcycle-helmets

Thank you in advance for taking the time out of your day to read, comment and critique, it goes a long way in not getting me canned!


u/Admirable_Chipington — 13 days ago