Tuesday, July 31, 2018


Programming is equal parts science and art. No single program will work the same for every person. However, when we take into consideration general human physiology, particularly the neurological, endocrine, and muscular systems, we can create a base template for an effective regimen. From there comes the art of micro-adjustments to address an individual athlete's strengths and weaknesses.
Stronger athletes are faster athletes. Optimal performance, whether in a 5K or a 140.6-mile Ironman, demands that the body maintain proper position and alignment; when biomechanics falter, times slow and injury risk climbs.
When it comes to training strength, Louie Simmons cannot be denied. The powerlifting guru, well known for his contributions to the strength community and for developing the Conjugate Method, has created some of the strongest people on the planet. Simmons knows our best shot at getting strong is focusing on maximal loads in three movements: squat, bench, and deadlift. These three lifts allow the body to move heavy weight, which requires the greatest amount of muscular contraction and, in turn, the greatest release of anabolic hormones such as growth hormone (somatotropin) and testosterone. Those hormones directly and positively impact our ability to recover from strenuous activity (such as our four- to five-hour Saturday “brick” training event). Simmons also knows that focusing entirely on these three movement patterns is not enough to optimize an athlete’s strength or performance.
Enter auxiliary movements and intensity training. 
Photo by Petty Officer 2nd Class Paul Cox, courtesy of U.S. Armed Forces Sports.

This is where Greg Glassman (founder of CrossFit) comes in. If nothing else, Glassman has ignited the imagination of fitness enthusiasts the world over. The core philosophy of CrossFit revolves around constantly varied, functional movements done at high intensity. “Functional movement” is highly subjective and should be treated as such. For example, the box step-up is a much more functional movement pattern for a triathlete or cyclist than the sexier plyometric variation, the box jump. Intensity is a highly effective means of breaking plateaus; however, if you find yourself on your back, heaving, gasping for breath more than once or twice a week, you are likely doing more harm than good.
“Constantly varied” is random. Random is the stagnation of athletic progression.
Enter endurance sport periodization: Working backward from your event on a week-to-week template based around your life/work schedule.
What is prohibitive about traditional endurance periodization is that it typically allows an athlete to be at or near peak performance only a few times a year. This is fine if you feed your family based on the results of those few events, but even among professional endurance athletes, that is the exception rather than the rule. Traditional endurance periodization calls for egregious levels of “base miles” to be accumulated, typically in the winter months. These training events are conducted in an oxidative metabolic pathway (aerobic state). That oxidative stress acts like rust on your soft tissue: muscle, skin, and connective tissue. Furthermore, your ability to compete at optimal levels in short-course events (such as a 5K) during these long, slow periods is greatly inhibited. Most endurance programs omit strength training during “the season.” However, incorporating strength training throughout the year ensures an athlete will recover from races faster.

What follows is the result of over 10,000 hours of implementing seemingly conflicting training methodologies with over 1,000 endurance, strength, and military athletes all over the world. This is the TF Black Method.
Endurance (Triathlete)
Week 1: Baseline
It is important to establish baseline numbers for strength and stamina. These numbers will determine future training loads and provide an empirical metric of improving fitness, so be sure to record your results.
Always spend the appropriate time on a proper warm-up prior to exercise.
Deadlift: 5 at 60%, 4 at 70%, 3 at 80%, 2 at 90%, 1 at max
Swim time trial: SC 500m; LC 1,000m; UC 1,500m
Run time trial: SC 1 mile; LC 3 miles; UC 5 miles
Muscular endurance: 5 pull-ups, 10 push-ups, 15 air squats; as many rounds as possible in 20 minutes
Active recovery: Technique-based swim, yoga, or stretching; 1 hour non-laborious movement
Bike time trial: SC 10 miles; LC 20 miles; UC 40 miles
Active recovery: Technique-based swim, yoga, or stretching; 1 hour non-laborious movement

For week 1 percentages, if unsure of your max, use rate of perceived exertion. (If five reps feel “pretty heavy” but still manageable, you’re likely around 60 percent. If one rep feels like “holy shit!” you’re probably at or nearing a maximal level. Don’t go past “holy shit!”) Maintain long recovery periods on Monday (2 to 3 minutes between sets).
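If you want a number rather than a feel, a common way to estimate a one-rep max from a hard submaximal set is the Epley formula. This is a sketch, not part of the program itself; the formula and the 5-lb. rounding are generic conventions, and the 225-lb. set of five is a made-up example:

```python
def estimate_1rm(weight, reps):
    """Estimate a one-rep max from a submaximal set using the Epley formula."""
    return weight * (1 + reps / 30)

def week1_loads(one_rm, percents=(0.60, 0.70, 0.80, 0.90)):
    """Compute week 1 working weights, rounded to the nearest 5 lb."""
    return [round(one_rm * p / 5) * 5 for p in percents]

# e.g. a hard-but-manageable set of 5 at 225 lb. suggests a ~262 lb. max
max_dl = estimate_1rm(225, 5)
print(week1_loads(max_dl))
```

Take the singles at the end of the ladder by feel; the estimate only gets you in the neighborhood.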
Take an honest look at each of the results. Your future performance depends on your ability to identify and attack your weakness. Remember, stronger athletes are faster athletes.

You now know where your fitness (strength, endurance, sport-specific) level is. Where do you want to be? How long do you have to get there? Work backward. A generic rule for triathlon is one month of training for every hour you anticipate racing. For your first Ironman, it’s reasonable to begin preparing a year or more in advance. For brevity's sake, let's focus our macro program on the sprint distance, assuming our event is 10 weeks out.
Macro Planning
The most important training event of the week for a triathlete is the “brick”: running immediately after cycling. It should be done on the same day of the week as your race, as close to race start time as possible (typically Saturday).
The second most important training event of the week, contrary to popular belief, is the heavy lift day. Again, this has everything to do with hormone response. Each week, this should be completed five days prior to your brick event. Active recovery days should be taken on days of the week to support these training events, preferably the day before and after the brick to allow for optimal race day simulation performance.
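The backward-planning rule above can be sketched as a small scheduling helper. The race date here is hypothetical; the day offsets simply encode the guidance that the heavy lift lands five days before the brick, with active recovery flanking it:

```python
from datetime import date, timedelta

def week_template(race_date, weeks_out=10):
    """Work backward from race day: each training week anchors its brick on
    the race weekday, with the heavy lift five days prior and active
    recovery (AR) the day before and after the brick."""
    plan = []
    for week in range(weeks_out, 0, -1):
        brick = race_date - timedelta(weeks=week)
        plan.append({
            "week": weeks_out - week + 1,
            "lift": brick - timedelta(days=5),   # Monday for a Saturday brick
            "ar_pre": brick - timedelta(days=1),
            "brick": brick,
            "ar_post": brick + timedelta(days=1),
        })
    return plan

plan = week_template(date(2018, 10, 6))      # hypothetical Saturday race
print(plan[0]["lift"].strftime("%A"))        # Monday
```

Shift the anchor day and everything else follows; the structure of the week, not the calendar, is what matters.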

Given the baseline results, we can see this is a strong athlete, which transfers well to the bike but not well to the run. The athlete’s swim, while not elite, shows a relative comfort in the water. This athlete will benefit most from learning how to run well on tired legs.
This is a basic outline of a sprint program shown to give reference points to progression. Here are the abbreviations you need to know:
RP: Race Pace
AR: Active Recovery
DL: Deadlift
FS: Front Squat
KBS: Kettlebell swing
TT: Time Trial
RFT: Rounds For Time
ME: Max Effort
Baseline results (lifts on Monday)
Lift: DL 315 lbs.
Muscular endurance: 12 rounds
Swim: 500m in 10 min.
Bike: 10 mi. in 30 min.
Run: 1 mi. in 10 min.
Week 1
Lift: Squat 5-4-3-2-1-1, same % as DL
Muscular endurance: 6 RFT of 200m run, 20 knee-to-elbows, 20 kettlebell swings
Swim: 6 rounds of 100m @ RP, 50m @ ME; rest 1 min.
Bike: 3x10 min. TT @ ME; 4 min. AR spin
Run: Mile repeats; 25% rest
Brick: B30/R10 min.
Week 2
Lift: DL 3x3 @ 80-85%
Muscular endurance: 100 step-ups, 400m run, 75 KBS, 400m run, 50 push-ups, 400m run, 25 pull-ups, 400m run
Swim: 3x250m @ RP; rest 25% (4:40 work, 1:10 recover)
Bike: 8x5 min. TT @ ME; 2 min. AR spin
Run: @ ME; rec. 90 sec.
Week 3
Lift: Squat 3x3 @ 80-85%
Muscular endurance: 20 wall balls, 400m run
Bike: 4x10 min. TT; 3 min. AR spin
Run: 10x400m @ ME; rec. 1 min.
Week 4
Lift: DL 2-2-2-2 @ 85%
Muscular endurance: 800m run, 35 burpees
Swim: 250m sprint, 500m TT, 100m kick
Run: 1K-800m-400m-400m; rec. 25%
Week 5
Lift: Squat 2-2-2-2 @ 85%; FS 3-3-3-3 @ 70% of back squat
Muscular endurance: 1K run, 100 weighted step-ups, 400 flutter kicks, 1K run; rest 2 min.
Swim: 3x50 kick; rest 20 sec.
Bike: 20 min. @ ME, 10 min. AR spin, 15 min. @ ME, 10 min. AR
Week 6
Lift: DL 1-1-1-1-1 @ 90%
Muscular endurance: 15 thrusters, 200m run
Swim: 10x100 @ ME; rest :40
Bike: 4x10 min. @ ME; 2 min. AR spin
Run: Hill repeats, 10x2 min. @ ME; walk down, 1 min. rest
Week 7
Lift: Squat 1-1-1-1-1 @ 90%
Muscular endurance: Retest baseline (5-10-15 for 20 min.)
Swim: 8x100 @ ME; rest :30
Rec. 1 min.
35/5x5 min. w/ 1 min. rest
Race week
Lift: DL 5-4-3-2-1-1 @ 60-70-80-90-100-100+%
AR: Swim 10x50m @ RP
Bike: 20 min. easy spin with 4-5 1-min. RP intervals
Retest results
Lift: DL 365 lbs.
Muscular endurance: 16 rounds
Swim: 750m in 13 min.
Bike: 20K (12.5 mi.) in 35 min.
Run: 5K (3.1 mi.) in 28 min.


Broadcasting has changed a lot in the last few decades. We have satellite radio, internet streaming, and HD radio all crowding out the traditional AM and FM bands. FM became popular because the wider channels and the modulation scheme allowed for less static and better sound reproduction. If you’ve never tried to listen to an AM radio station at night near a thunderstorm, you can’t appreciate how important that is. But did you know there was another U.S. broadcast band before FM that tried to solve the AM radio problem? You don’t hear about it much, but Apex or skyscraper radio appeared between 1937 and 1941 and then vanished with the onslaught of FM radio.
If you’ve heard of Apex radio — or if you are old enough to remember it — then you are probably done with this post. For everyone else, consider what radio looked like in 1936. The AM band had 96 channels between 550 and 1500 kHz. Because those frequencies propagate long distances at night, the FCC had a complex job of ensuring stations didn’t interfere with each other. Tricks like carefully choosing the location of stations, reducing power at night, or even shutting a station down after dark, were all used to control interference.
In addition, AM radios (like the 1924 Atwater Kent) didn’t sound all that great. The narrow bandwidth wasn’t adequate for music reproduction. The amplitude modulation was susceptible to noise and fading. Adjacent channels tended to interfere with each other. When radio first appeared, hearing anything at all was so close to a miracle that none of this seemed very important, but as radio became an integral part of society, these things were all negatives.
In 1932, the FCC created three experimental frequencies at 1530, 1550, and 1570 kHz allowing a wider signal than on conventional frequencies. However, only four stations operated on these channels.


Meanwhile, engineers were finding ways to get higher and higher frequencies. They found that above 20 MHz, propagation was usually very limited — practically line of sight in most cases. While you would think that distance is what you want in a communication system, from the FCC’s point of view being able to limit a broadcaster to the immediate area was quite attractive.
The FCC started encouraging broadcasters to experiment with “ultra high frequencies” above 20 MHz, and in 1934, W8XH (a broadcaster, not a ham station) started regular broadcasts in Buffalo, New York. W8XH was part of WBEN, a traditional AM station; anyone who wanted to listen had to build their own equipment, and most programming was just a rebroadcast of the companion AM station. However, in 1936 W9XAZ in Milwaukee did produce regular original programming.
The receiver problem was particularly an issue since most receivers topped out at 20 MHz. You could build a converter, but you lost some of the advantages that way. It would be 1937 before you could buy radios that could tune the “ultra shortwave” bands, like the Raco R-S-R Clipper, the RCA Magic Brain, or a McMurdo Silver. Their performance at these frequencies often left something to be desired.
As any ham will tell you, while propagation above 20 MHz is generally line of sight, it isn’t always. The FCC noted that W6XKG in Los Angeles had been heard in Asia and Europe. W9XAZ would sometimes be audible in Australia, along with other stations from the United States.


In 1937, the FCC decided to make the high-frequency broadcast band official by putting 75 channels, starting at 41.02 MHz, with 40 kHz spacing between channels. This was four times the width of an AM channel, so there was less interference and it could accommodate a better-sounding — but still AM — signal. There were about 50 stations using high frequencies that had to move to the new band. In other words, Apex was AM radio on higher frequencies with a regimented bandplan. Not a bad idea at all.
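Assuming the channels ran upward from 41.02 MHz, the band-edge arithmetic is easy to check; 75 channels at 40 kHz spacing put the top channel at 43.98 MHz:

```python
# The 1937 Apex band plan: 75 AM channels, 40 kHz apart, from 41.02 MHz up.
channels_mhz = [round(41.02 + n * 0.040, 2) for n in range(75)]
print(channels_mhz[0], channels_mhz[-1])  # 41.02 43.98
```

Compare that 40 kHz spacing with the 10 kHz channels of the standard AM broadcast band, which is where the “four times the width” figure comes from.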
But the new band only lasted about four years. Edwin Armstrong was pushing for FM radio service, and the FCC was amazed at the audio quality possible with that system. By 1939, the commission encouraged Apex broadcasters to move to FM. In 1940, it reallocated the band to support 40 FM channels ranging from 42 to 50 MHz. RCA would later lobby to move the band again, although there is debate over whether that was for technical reasons or just to spite Armstrong by making his equipment obsolete.


The last two Apex stations were Cleveland’s WBOE, which converted to FM in February 1941 (ironically, in 1938 it had become the first station to broadcast on the Apex channels reserved for non-commercial educational stations), and Kentucky’s WBKY, which shut down in June of that year.
Why the name? Line-of-sight broadcasting required the antenna to be up high, at an apex, often atop a skyscraper. Next time you see an old radio with the “Apex” band on it, you’ll know why.


This Prosthesis Lets You Multitask With Three Arms: The study showed that wearing the arm can actually nurture the wearer’s own multitasking skills over time, even after ditching the extra limb.


The Kepler spacecraft is in the final moments of its life. NASA isn’t quite sure when they’ll say their last goodbye to the space telescope, which has confirmed the existence of thousands of exoplanets since its launch in 2009, but most estimates give it a few months at best. The prognosis is simple: she’s out of gas. Without propellant for its thrusters, Kepler can’t orient itself, and that means it can’t point its antenna at Earth to communicate.
Now, as far as spacecraft failures go, propellant depletion isn’t exactly unexpected. After all, a spacecraft can’t pull into the nearest service station to top off the tanks. What makes it interesting that Kepler will finally cease operations for such a mundane reason is that the roughly $600 million space telescope has already “died” once before. Back in 2013, NASA announced Kepler was irreparably damaged following a series of critical system failures that had started the previous year.
But thanks to what was perhaps some of the best last-ditch effort hacking NASA has done since they brought the crew of Apollo 13 home safely, a novel way of getting the spacecraft back under control was implemented. While it was never quite the same, Kepler was able to continue on with modified mission parameters and to date has delivered so much raw data that scientists will be analyzing it for years to come. Not bad for a dead bird.
Before Kepler goes dark for good, let’s take a look at how NASA managed to resurrect this planet hunting space telescope and greatly expand our knowledge of the planets in our galaxy.


To understand the problems Kepler ran into, it’s important to understand how Kepler searches for planets. The telescope watches a section of the sky and carefully notes the dimming and flickering of individual stars. While on Earth the stars appear to twinkle due to refraction as the light passes through our atmosphere, in deep space the light from stars should be constant unless it’s physically blocked by something. Operating under this principle, Kepler looks for disruptions in the light from a star, which can indicate that there’s a planet in orbit around it. With careful observation it’s possible to determine the size and number of planets around each star, allowing us to virtually image distant solar systems.
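The transit principle can be illustrated with a toy light curve. This is a simplified sketch with made-up numbers (a 1% dip every 100 samples, 0.1% noise), not Kepler's actual photometry pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy light curve: 1000 brightness samples with 0.1% noise, plus a planet
# that blocks 1% of the starlight every 100 samples for 5 samples.
flux = 1.0 + rng.normal(0, 0.001, 1000)
for start in range(50, 1000, 100):
    flux[start:start + 5] -= 0.01

# Flag samples more than half a transit depth below the median brightness,
# then find the rising edges: each edge marks one candidate transit.
dips = flux < np.median(flux) - 0.005
transit_starts = np.flatnonzero(dips[1:] & ~dips[:-1]) + 1
period = np.median(np.diff(transit_starts))
print(len(transit_starts), period)   # 10 transits, 100-sample period
```

The dip depth hints at the planet's size relative to its star, and the spacing between dips gives the orbital period, which is why a wobbling telescope that smears stars across pixels ruins the measurement.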
As you might expect, for this to work Kepler must be able to control its orientation in space very carefully. The stars need to remain relatively stationary from the perspective of the telescope to minimize false positives. As anyone who does astrophotography here on Earth can tell you, there are ways to compensate for drift and noise to get clearer images of the sky. But for the best results the camera really needs to be locked onto the stars as closely as possible.
Photo credit: Ball Aerospace via SpaceNews
To maintain its orientation, Kepler was outfitted with thrusters and four reaction wheels: flywheels that are used to store angular momentum and apply torque on the spacecraft. Thrusters are ideal when large changes to the spacecraft’s orientation are required, with the reaction wheels reserved for small and precise adjustments. Unfortunately, in 2012 one of Kepler’s reaction wheels started having problems. This left three remaining, which was enough to continue on with the mission, but in 2013 another wheel shut down. With only two functioning reaction wheels the spacecraft was unable to precisely align itself during observations, effectively ending its original mission.
At this point the mission had already accomplished its scientific goals. Even if Kepler never looked at another star again it would have still been a huge success. But as the spacecraft was still largely functional, NASA started looking for a way to utilize it with a revised scientific mission the agency referred to as K2.


NASA realized that with only half of the reaction wheels operational, there was no way for Kepler to equally apply torque in all dimensions. Trying to use the two remaining wheels would simply cause the spacecraft to tumble. What NASA needed was a way to apply some sort of pressure on the craft which the remaining reaction wheels could push against. The solution came from a rather surprising place: the sun.
The fix NASA came up with exploits the fact that photons striking the spacecraft exert a constant, if slight, force. By carefully maneuvering Kepler to face the sun in the proper orientation, the two remaining reaction wheels can be used to apply torque in opposition of the solar pressure. Once equilibrium is reached, the spacecraft is balanced well enough that it can continue making observations.
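A back-of-envelope sketch shows how slight that photon force really is. The sunlit area used here is a placeholder guess for illustration, not Kepler's actual specification:

```python
SOLAR_FLUX = 1361.0      # W/m^2 near 1 AU (Kepler trails Earth's orbit)
C = 2.998e8              # speed of light, m/s

def radiation_force(area_m2, reflectivity=0.0):
    """Force of sunlight absorbed by a flat area; a perfect mirror doubles it."""
    return (1 + reflectivity) * SOLAR_FLUX * area_m2 / C

# Assuming roughly ten square meters of sunlit area (a guess):
f = radiation_force(10.0)
print(f"{f * 1e6:.0f} micronewtons")
</```

A force on the order of tens of micronewtons is tiny, but it is constant, which is exactly what made it usable as something for the two remaining reaction wheels to push against.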
It’s not a perfect solution. Positioning Kepler with this method consumes more thruster propellant than would normally be required, and there’s been a considerable drop in sensitivity due to the fact that this careful balancing act isn’t quite as stable as when orientation was being controlled by all four reaction wheels. But even still, the Kepler K2 mission has managed to collect invaluable scientific data over the last four years from a spacecraft that many wrote off as dead.


Had all four reaction wheels remained operable, Kepler probably wouldn’t be running out of propellant right now. Ironically, the increased propellant usage necessary to keep Kepler balanced against solar pressure has, in a way, hastened the end of the mission. But considering the alternative was to shut Kepler down in 2013 when it first started tumbling through space, it was a supremely successful hack.
As it stands, NASA isn’t 100% sure how much propellant Kepler still has in the tanks. Believe it or not, there’s no way to tell with the sensors onboard. They can estimate based on how much was in the tanks when it left Earth and how many burns they’ve done, and that tells them they are getting down to the wire. But until they command the thrusters to fire and nothing happens, there’s really no way to know for sure if the tanks are dry. Accordingly, NASA is limiting thruster usage as much as possible.
When the thrusters start sputtering and NASA knows Kepler is in the final moments of its life, they will command it to point its high-gain antenna at Earth and make one last broadcast of all the data it has collected before powering down forever. Kepler’s orbit is far enough out that it will never return to Earth or get close enough to anything else in the solar system to be a problem. It will likely spend the rest of eternity as a deep space monument to human ingenuity and our insatiable need to explore.

Monday, July 30, 2018

Understand The Difference Between Second Cousins And Cousins Once Removed
Save this for your next family reunion.
by Genevieve Lill

Remembering the difference between a “second cousin” and a “cousin once removed” is one of those facts that I have filed away in my brain as non-essential. I think I know the difference, but then I get fuzzy on the details.
Luckily I now have a handy chart to bookmark in my web browser so that I never forget again.
You’ll find it useful, too, especially if you’ve been invited to a family reunion this summer.
This chart was designed by Alice J. Ramsay in 1987, but her advice still stands today.
Here’s how to use the chart: Start from the “Self” box, and then trace your way to the relationship you are trying to name.
Remembering what my mom’s cousin’s children are in relation to me is always a tricky one for me. Using this chart, I can see that they are my second cousins. And their kids? My second cousins once removed. Their kids? Second cousins twice removed.

Once Removed—What Does It Mean?

This chart gives us a visual depiction of what “once removed” really means. It’s easy to see that you and all of your cousins—even those second and third cousins—are in the same generation. But when you get into different generations, that’s when a cousin becomes “once” or “twice” removed—what that really means, according to the chart, is one generation removed.
“For example, your mother’s first cousin is your first cousin, once removed. This is because your mother’s first cousin is one generation younger than your grandparents and you are two generations younger than your grandparents,” according to an article on Genealogy. “This one-generation difference equals ‘once removed.’ Twice removed means that there is a two-generation difference. You are two generations younger than a first cousin of your grandmother, so you and your grandmother’s first cousin are first cousins, twice removed.”
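The quoted rule reduces to simple arithmetic on generation counts below the shared ancestor: the degree of cousin is the smaller count minus one, and "removed" is the difference between the counts. A minimal sketch of that rule:

```python
def relationship(gen_a, gen_b):
    """Name the cousin relationship between two descendants of a common
    ancestor, given each person's generation count below that ancestor
    (child = 1, grandchild = 2, ...). Assumes neither person is the
    other's direct ancestor and they aren't siblings."""
    degree = min(gen_a, gen_b) - 1     # 1 = first cousins, 2 = second, ...
    removed = abs(gen_a - gen_b)       # generations apart
    ordinal = {1: "first", 2: "second", 3: "third"}.get(degree, f"{degree}th")
    name = f"{ordinal} cousins"
    if removed:
        times = {1: "once", 2: "twice"}.get(removed, f"{removed} times")
        name += f", {times} removed"
    return name

# You are 3 generations below your great-grandparents; your mother's
# first cousin is 2 generations below them:
print(relationship(3, 2))   # first cousins, once removed
print(relationship(3, 3))   # second cousins
```

Plug in any two generation counts and you get the same answers tracing the chart gives, just without the spinning.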
Once you get your brain to stop spinning, just look at the chart. It will help you understand.
Alice J. Ramsay

So there you have it. You’ll never be cousin-confused again—or if you are, just refer to them all as cousins and call it a day!