Saturday, December 31, 2022

Elecraft KX2 and WSJT-X Easy Setup BY K0PIR

Sometimes all we need is a little QRP transceiver, a weak-signal mode, and a good antenna. In comes the Elecraft KX2, WSJT-X, and a random wire. It can be used successfully in the field or at home (Ham Shack Portable). You'll find more opportunity to make short contacts (QSOs) using FT8, but many operators prefer CW because there's conversation and more human interaction. The Elecraft KX2 is well suited for CW ops, and I think it's the best backpacking/SOTA HF transceiver around.

Elecraft KX2 WSJT-X

With the Elecraft KX2 transceiver I am using an Elecraft KXUSB serial cable, a Sabrent USB sound card, two 3.5mm stereo male-male audio cables, and an HP Stream laptop running WSJT-X version 2.0.

Sabrent USB Connections to KX2: Sabrent Mic In to Elecraft KX2 Phones. Sabrent Speaker Out to KX2 Mic In.
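
If you're not sure which sound device to pick in WSJT-X's Audio settings, a quick script can show what the operating system sees. Here's a minimal sketch, assuming the third-party python-sounddevice package is installed; the exact label the Sabrent card shows up under varies by OS and driver.

```python
# List audio devices so the Sabrent USB sound card can be identified
# before selecting it in WSJT-X (Settings -> Audio).
import sounddevice as sd

for idx, dev in enumerate(sd.query_devices()):
    # USB sound cards usually include "USB" or a vendor name in the label.
    if "USB" in dev["name"]:
        print(f"{idx}: {dev['name']} "
              f"(in: {dev['max_input_channels']}, out: {dev['max_output_channels']})")
```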

The antenna I am using inside the Ham Shack is an 80′ random wire with a 9:1 UNUN.
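
Why 9:1? Impedance transforms as the square of the turns ratio, so a 3:1 winding presents roughly one ninth of the wire's feedpoint impedance to the rig. A quick back-of-the-envelope check (the 450 Ω figure is a common rule of thumb, not a measured value; a random wire's actual feedpoint impedance varies widely with frequency):

```python
# Impedance ratio = (turns ratio)**2, so a 3:1 winding gives 9:1.
turns_ratio = 3
z_feedpoint = 450   # rough rule-of-thumb feedpoint impedance of a random wire, ohms
z_presented = z_feedpoint / turns_ratio**2
print(f"{z_presented:.0f} ohms")  # ~50 ohms, close to the rig's design load
```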

K0PIR Suggested KX2 Settings (see video below; a scripted example follows the settings list):

Receiver and Transmitter
  • MODE: DATA A (WSJT-X will assist, but verify the rig is in DATA A mode with the correct FIL width)
  • NB: OFF
  • NR: OFF
  • PRE: OFF
  • RF Gain: 0
  • FIL BW: 4 kHz
  • FIL Center: 1.5 kHz
  • AF: 2-10
  • AGC MD: OFF (See: How to Save Your Ears When AGC is OFF)
  • AGC LIM: 20
  • MIC GAIN: 0-50 (4 bars of ALC on transmit)
  • PWR: 5 to 7 watts
  • MICBIAS: OFF (may not be necessary to turn OFF)
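
For repeatable setups, several of these settings can also be pushed to the radio over the KXUSB cable using the Elecraft K3-family CAT command set, which the KX2 shares. Below is a hedged sketch using pyserial; the port name, baud rate, and exact command values are assumptions on my part, so verify them against the KX2 Programmer's Reference before sending anything to your rig.

```python
# Sketch: apply a few of the settings above via CAT (Elecraft K3-family
# command set). Verify commands against the KX2 Programmer's Reference.
import serial

CAT_PORT = "/dev/ttyUSB0"   # assumption: KXUSB cable on Linux; use COMx on Windows
BAUD = 38400                # assumption: match the KX2's RS232 menu setting

commands = [
    b"MD6;",     # MODE: DATA
    b"DT0;",     # data sub-mode: DATA A
    b"FW0400;",  # FIL bandwidth 4.00 kHz (FW is in 10 Hz steps on this family)
    b"PC005;",   # PWR: 5 watts
]

with serial.Serial(CAT_PORT, BAUD, timeout=1) as rig:
    for cmd in commands:
        rig.write(cmd)
```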

My WSJT-X settings:

  • Rig: KX2
  • PTT: CAT
  • Mode: Data/Pkt
  • Split: Fake It

Elecraft KX2 Connections to HP Stream Laptop

Elecraft KX2 Settings for Digital Modes Data A Video


Thank you

More to come on WinKeyer 3 and RTTY FSK. Post your thoughts below in the comment section.

If you have any questions, comments or solutions, please comment below. I prefer the comment section here or in YouTube over e-mail because your comments and questions will help others as well.

Thank you for subscribing to this website. You can also follow me on Twitter, Facebook, Instagram, and YouTube.

73,

Rich, K0PIR

Sources:

The Elecraft KX2 by Fred Cady – KE7X

Elecraft KX2 Owners Manual

How to Save Your Ears When AGC is OFF


Ham Radio 101: Help for the Morse Impaired Posted by Mark Haverstock, K8MSH

On February 23, 2007, the FCC eliminated the Morse code requirement for all U.S.-issued amateur licenses. 

Within 72 hours of the announcement, the American Radio Relay League (ARRL) staff reported a doubling of the requests for study materials for new or upgraded licensees. 

Prospective licensees now had one less hurdle between them and a ham ticket.

Yet, there seems to be a recent resurgence in CW (Morse code) operation. Why? Morse code gets through when SSB fails. This isn’t just the die-hard CW fans speaking. It is a well-known fact. Hams around the world work rare countries every day using CW and power levels ranging from QRP to 100 watts with simple antennas. Portable operations like SOTA and POTA welcome the weight reduction as a result of small but feature-rich CW radios, lithium batteries, and truly portable antennas.

Our local club recently did a ham radio demonstration for several STEM classes. Their favorite topic was Morse code—hard to believe until we heard them plotting ways to use CW so they could pass messages secretly under the noses of teachers and administrators.

Supply chain shortages affect gasoline, baby formula, and computer chips. Ham radio also experiences shortages—primarily Field Day CW operators. Band captains are going crazy trying to recruit experienced CW ops. It seems there’s not enough talent available to copy pileups or send CW exchanges like “7A Ohio.” Predictably, phone and digital sections of the bands will likely be more crowded this year.

Tune In

Ready to give it a try? First, understand that learning Morse code is not hard. However, it takes diligent practice to become proficient. Think of it as learning to play the piano, except that it won't take years to become an effective operator.

You're going to have to actually listen to Morse code if you ever want to learn it. Being able to tune in CW signals correctly is a critical skill—a good starting point for those ready to tackle what's needed to become a CW op. Turn on your transceiver and switch to one of the active ham bands. Move to the lower portion of the band where the CW signals are (for example, on 40 meters you'd tune between 7,000 and 7,125 kHz).

Switch the mode to CW and practice tuning in stations. Seek out one of the CW signals and tune in as close as you can—some radios match your frequency to the other station's frequency using the spot or auto-tune button. If necessary, activate the RIT (Receiver Incremental Tuning) to fine-tune the station as you listen to the QSO. Make filter and bandwidth adjustments as necessary to help block interference. Some radios have adjustable speed settings in WPM, which match the speed of the message being sent and enhance the ability to decode CW.
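
Matching the speed setting matters because WPM maps directly to element timing: under the standard PARIS convention, one dit lasts 1200/WPM milliseconds and a dah lasts three dits. A quick illustration:

```python
# Element timing under the standard PARIS convention.
def dit_ms(wpm: int) -> float:
    """Duration of one dit in milliseconds at a given words-per-minute speed."""
    return 1200 / wpm

for wpm in (13, 20, 30):
    print(f"{wpm:>2} WPM: dit = {dit_ms(wpm):.0f} ms, dah = {3 * dit_ms(wpm):.0f} ms")
```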

Electronic Assist

Charts, cheat sheets, and quick memorization schemes can actually slow you down. A decoder may help you sort out those beeping noises and get you to listen and connect code to letters, numbers, and eventually words and phrases. There are three main types: hardware decoders, which are often paired with keyers; decoders built into some transceivers; and software programs that work with your radio and computer.
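
To make the software category concrete, here is a toy sketch of the final table-lookup step every decoder performs. Real programs first have to recover the dits and dahs from noisy audio, which is the hard part; this sketch assumes that step is already done.

```python
# Toy illustration of a decoder's last step: mapping dot/dash groups to
# characters. Real decoders must first extract these symbols from audio.
MORSE = {
    ".-": "A", "-...": "B", "-.-.": "C", "-..": "D", ".": "E",
    "..-.": "F", "--.": "G", "....": "H", "..": "I", ".---": "J",
    "-.-": "K", ".-..": "L", "--": "M", "-.": "N", "---": "O",
    ".--.": "P", "--.-": "Q", ".-.": "R", "...": "S", "-": "T",
    "..-": "U", "...-": "V", ".--": "W", "-..-": "X", "-.--": "Y",
    "--..": "Z",
}

def decode(symbols: str) -> str:
    """Decode letters separated by spaces and words separated by ' / '."""
    return " ".join(
        "".join(MORSE.get(letter, "?") for letter in word.split())
        for word in symbols.split(" / ")
    )

print(decode("-.-. --.- / -.. ."))  # -> CQ DE
```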

SOTAbeams WOLF-100 WOLFWAVE Advanced Audio Processor: Works with any radio via the speaker or headphone jack. It has bandpass filtering, noise reduction, age-related hearing correction, and a Morse decoder that can decode signals from 1 to 100 WPM. Add to this a CW regenerator that works on the CW signal in the center of the passband: it detects the signal and regenerates the CW with a clean sine wave, making audio copy easier and cleaning up the background noise for improved accuracy. It also adjusts to the received CW speed.

MFJ-461: Place the pocket-size portable MFJ Morse Code/CW Reader near your receiver’s speaker—then watch Morse code turn into text messages as they scroll across a 32-character LCD display. AutoTrak automatically locks on and tracks CW speed to decode high- and low-speed Morse code. A serial port lets you display CW text on a monitor using a computer and terminal program. When it’s too noisy for its internal microphone pickup, you can connect the MFJ-461 to your receiver with a cable.

Using Your Radio as Decoder

If your radio has a built-in CW decoder, follow the operation directions in your transceiver manual. Also note that some transceivers can utilize computer software to expand the display. On the Kenwood TS-590SG, received Morse code is shown in the front display, with 13 characters visible at a time as they scroll by. If you utilize the ARCP-590G control software, characters are shown in a dedicated window on the computer display.

Starting with the K3, Elecraft has used a similar slice of the panel display on their transceivers—even the KX2 and KX3 have this option. You’ll probably want to turn on CWT as a tuning aid. Also, auto-spot can be used to tune in signals.

Your success will often depend on receiver settings. Be sure to adjust the noise floor down to improve the S/N ratio. Also set your CW speed close to how fast you think the other station is sending to you. It doesn’t have to be exact, but within a few WPM. The Yaesu FTDX10 has a reasonably large decode screen, six lines of 40 characters each. There’s also a handy tuning offset indicator below the S-meter to help you precisely match the other station’s frequency.

Software Solutions

Fldigi is a popular modem program for most of the digital modes such as CW, PSK, MFSK, RTTY, Hell, DominoEX, Olivia, and Throb. Its appeal is in the number of digital modes it covers, including CW. Make sure you are using the radio in CW mode and not SSB; this will let you use your radio's filtering capabilities. (Linux/Mac/Windows)

MRP40 decodes received CW audio that’s been fed to your computer’s sound card. The decoded text is then displayed on the computer’s monitor. MRP40 sends and reads CW (5 to 60 WPM), and helps to decode weak DX signals. The Audio Analyzer FFT Display displays the incoming Morse audio spectrum graphically, giving you a full overview of any CW activity in the selected audio band. MRP40 is compatible with Winkeyer USB, SignaLink USB Digital Communications Interface, Microham, and other popular interfaces. (Windows/Mac)

CwGet is a Morse decoder program with built-in options for large type and high contrast colors. No special hardware is required—you can use a single cable to connect the speaker output from a receiver to a computer with a sound card. It translates incoming Morse code into text using the font and colors of your choice, locks onto a signal (AFC), and automatically adjusts to the speed of a CW signal. CwGet does not transmit, but there is a companion program, CwType, that will. (Windows)

Tips for Using Decoders

CW decoders aren’t magic. Understand that they have limitations:

  • Accurate decoding doesn’t happen with fading and poor reception conditions.
  • There is a lot of sloppy code on the air—don’t expect readers to do the impossible when it comes to copying Morse code.
  • Irregular rhythm, speed, and bad spacing affect accurate copy.
  • Nothing can copy a weak signal with lots of noise.
  • Make sure the signal is tuned as close as possible—use spot or auto tune. Signals way off frequency will not decode well.
  • Use filtering to improve copy.
  • Reduce the noise floor with RF gain control or attenuator.
  • Invalid characters are displayed as block characters, spaces, or the letter E—usually a result of weak signals.
  • With continued practice, it’s easier to fill in the blanks/missing characters.

Copying strong, well-sent code, especially messages sent with electronic keyers or keyboards, will always produce the best results. Tune into a W1AW code practice session sometime and see for yourself.

The ultimate CW decoder will always be the human brain, but software/firmware can also do a pretty good job helping you become more proficient. May the Morse be with you . . .


Understanding Biden and Red Ink by Dan Mitchell

I don’t worry much about budget deficits. Simply stated, it is far more important to focus on the overall burden of government spending.

To be sure, it is not a good idea to have too much debt-financed spending. But it’s also not a good idea to have too much tax-financed spending. Or too much spending financed by printing money.

Other people, however, do fixate on budget deficits. And I get drawn into those debates.

For instance, I wrote back in July that Biden was spouting nonsense when he claimed credit for a lower 2022 deficit. But some people may have been skeptical since I cited numbers from Brian Riedl, and he works at the right-of-center Manhattan Institute.

So let’s revisit this issue by citing some data from the middle-of-the-road Committee for a Responsible Federal Budget (CRFB). They crunched the numbers and estimated the impact, between 2021 and 2031, of policies that Biden has implemented since becoming president.

The net result: $4.8 trillion of additional debt.

By the way, this is in addition to all the debt that will be incurred because of policies that already existed when Biden took office.

If you want to keep score, the Congressional Budget Office projects additional debt of more than $15 trillion over the 2021-2031 period, so Biden is responsible for about 30 percent of the additional red ink.

Some readers may be wondering how Biden’s 10-year numbers are so bad when the deficit actually declined in 2022.

But we need to look at the impact of policies that already existed at the end of 2021 compared to policies that Biden implemented in 2022.

As I explained back in May, the 2022 deficit was dropping simply because of all the temporary pandemic spending. To be more specific, Trump and Biden used the coronavirus as an excuse to add several trillion dollars of spending in 2020 and 2021.

That one-time orgy of spending largely ended in 2021, so that makes the 2022 numbers seem good by comparison.

Sort of like an alcoholic looking responsible for “only” doing 7 shots of vodka on Monday after doing 15 shots of vodka every day over the weekend.

If that’s not your favorite type of analogy, here’s another chart from the CRFB showing the real reason for the lower 2022 deficit.

I’ll close by reminding everyone that the real problem is not the additional $4.8 trillion of debt Biden has created.

That’s merely the symptom.

The ever-rising burden of government spending is America’s real challenge.

P.S. If you want to watch videos that address the growth-maximizing size of government, click here, here, here, here, and here.

P.P.S. Surprisingly, the case for smaller government is bolstered by research from generally left-leaning international bureaucracies such as the OECD, World Bank, ECB, and IMF.

The Brief History of Artificial Intelligence: The World Has Changed Fast—What Might Be Next? By Dr. Max Roser

 

To see what the future might look like, it is often helpful to study our history. This is what I will do in this article: I retrace the brief history of computers and artificial intelligence to see what we can expect for the future.

How Did We Get Here?

How rapidly the world has changed becomes clear by how even quite recent computer technology feels ancient to us today. Mobile phones in the ‘90s were big bricks with tiny green displays. Two decades before that the main storage for computers was punch cards.

In a short period computers evolved so quickly and became such an integral part of our daily lives that it is easy to forget how recent this technology is. The first digital computers were only invented about eight decades ago, as the timeline shows.

[Image: timeline of the history of computers]

Since the early days of this history, some computer scientists have strived to make machines as intelligent as humans. The next timeline shows some of the notable artificial intelligence systems and describes what they were capable of.

The first system I mention is Theseus. It was built by Claude Shannon in 1950 and was a remote-controlled mouse that was able to find its way out of a labyrinth and could remember its course. In seven decades the abilities of artificial intelligence have come a long way.

[Image: timeline of notable artificial intelligence systems]

Language and Image Recognition Capabilities of AI Systems Are Now Comparable to Those of Humans

The language and image recognition capabilities of AI systems have developed very rapidly.

The chart shows how we got here by zooming into the last two decades of AI development. The plotted data stems from a number of tests in which human and AI performance were evaluated in five different domains, from handwriting recognition to language understanding.

Within each of the five domains the initial performance of the AI system is set to -100, and human performance in these tests is used as a baseline set to zero. This means that when a model's performance crosses the zero line, the AI system scored more points on the relevant test than the humans who took the same test.
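
In other words, each raw test score is linearly rescaled so that the system's initial score maps to -100 and the human baseline maps to 0. A minimal sketch of that rescaling (variable names and example numbers are mine, not the study's):

```python
def normalize(raw: float, initial: float, human: float) -> float:
    """Rescale a raw score so that `initial` -> -100 and `human` -> 0."""
    return 100 * (raw - human) / (human - initial)

# Made-up benchmark where the first AI system scored 20 and humans score 90.
print(normalize(20, initial=20, human=90))  # -100.0  (starting point)
print(normalize(90, initial=20, human=90))  #    0.0  (human parity)
print(normalize(95, initial=20, human=90))  #   ~7.1  (above human level)
```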

Just 10 years ago, no machine could reliably provide language or image recognition at a human level. But, as the chart shows, AI systems have become steadily more capable and are now beating humans in tests in all these domains.

Outside of these standardized tests the performance of these AIs is mixed. In some real-world cases these systems are still performing much worse than humans. On the other hand, some implementations of such AI systems are already so cheap that they are available on the phone in your pocket: image recognition categorizes your photos and speech recognition transcribes what you dictate.

From Image Recognition to Image Generation

The previous chart showed the rapid advances in the perceptive abilities of artificial intelligence. AI systems have also become much more capable of generating images.

This series of nine images shows the development over the last nine years. None of the people in these images exist; all of them were generated by an AI system.

The series begins with an image from 2014 in the top left, a primitive image of a pixelated face in black and white. As the first image in the second row shows, just three years later AI systems were already able to generate images that were hard to differentiate from a photograph.

In recent years, the capability of AI systems has become much more impressive still. While the early systems focused on generating images of faces, these newer models broadened their capabilities to text-to-image generation based on almost any prompt. The image in the bottom right shows that even the most challenging prompts—such as “A Pomeranian is sitting on the King’s throne wearing a crown. Two tiger soldiers are standing next to the throne”—are turned into photorealistic images within seconds.

Language Recognition and Production Is Developing Fast

Just as striking as the advances of image-generating AIs is the rapid development of systems that parse and respond to human language.

Shown in the image are examples from an AI system developed by Google called PaLM. In these six examples, the system was asked to explain six different jokes. I find the explanation in the bottom right particularly remarkable: the AI explains an anti-joke that is specifically meant to confuse the listener.

AIs that produce language have entered our world in many ways over the last few years. Emails get auto-completed, massive amounts of online texts get translated, videos get automatically transcribed, school children use language models to do their homework, reports get auto-generated, and media outlets publish AI-generated journalism.

AI systems are not yet able to produce long, coherent texts. In the future, we will see whether the recent developments will slow down—or even end—or whether we will one day read a bestselling novel written by an AI.

Where We Are Now: AI Is Here

These rapid advances in AI capabilities have made it possible to use machines in a wide range of new domains:

When you book a flight, it is often an artificial intelligence, and no longer a human, that decides what you pay. When you get to the airport, it is an AI system that monitors what you do at the airport. And once you are on the plane, an AI system assists the pilot in flying you to your destination.

AI systems also increasingly determine whether you get a loan, are eligible for welfare, or get hired for a particular job. Increasingly they help determine who gets released from jail.

Several governments are purchasing autonomous weapons systems for warfare, and some are using AI systems for surveillance and oppression.

AI systems help to program the software you use and translate the texts you read. Virtual assistants, operated by speech recognition, have entered many households over the last decade. Now self-driving cars are becoming a reality.

In the last few years, AI systems helped to make progress on some of the hardest problems in science.

Large AIs called recommender systems determine what you see on social media, which products are shown to you in online shops, and what gets recommended to you on YouTube. Increasingly they are not just recommending the media we consume, but based on their capacity to generate images and texts, they are also creating the media we consume.

Artificial intelligence is no longer a technology of the future; AI is here, and much of what is reality now would have looked like sci-fi just recently. It is a technology that already impacts all of us, and the list above includes just a few of its many applications.

The wide range of listed applications makes clear that this is a very general technology that can be used by people for some extremely good goals—and some extraordinarily bad ones, too. For such 'dual-use technologies', it is important that all of us develop an understanding of what is happening and how we want the technology to be used.

Just two decades ago the world was very different. What might AI technology be capable of in the future?

What Is Next?

The AI systems that we just considered are the result of decades of steady advances in AI technology.

The big chart below brings this history over the last eight decades into perspective. It is based on the dataset produced by Jaime Sevilla and colleagues.

Each small circle in this chart represents one AI system. The circle’s position on the horizontal axis indicates when the AI system was built, and its position on the vertical axis shows the amount of computation that was used to train the particular AI system.

Training computation is measured in floating point operations, or FLOP for short. One FLOP is equivalent to one addition, subtraction, multiplication, or division of two decimal numbers.

All AI systems that rely on machine learning need to be trained, and in these systems training computation is one of the three fundamental factors that are driving the capabilities of the system. The other two factors are the algorithms and the input data used for the training. The visualization shows that as training computation has increased, AI systems have become more and more powerful.

The timeline goes back to the 1940s, the very beginning of electronic computers. The first shown AI system is ‘Theseus’, Claude Shannon’s robotic mouse from 1950 that I mentioned at the beginning. Towards the other end of the timeline you find AI systems like DALL-E and PaLM, whose abilities to produce photorealistic images and interpret and generate language we have just seen. They are among the AI systems that used the largest amount of training computation to date.

The training computation is plotted on a logarithmic scale, so that each grid line marks a 100-fold increase. This long-run perspective shows a continuous increase. For the first six decades, training computation increased in line with Moore's Law, doubling roughly every 20 months. Since about 2010 this exponential growth has sped up further, to a doubling time of just about six months. That is an astonishingly fast rate of growth.

The fast doubling times have compounded into large increases. PaLM's training computation was 2.5 billion petaFLOP, more than 5 million times that of AlexNet, the AI system with the largest training computation just 10 years earlier.
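
The arithmetic behind that jump is worth spelling out: a fixed doubling time compounds exponentially, so a six-month doubling time alone implies roughly a million-fold growth per decade, the same order of magnitude as the AlexNet-to-PaLM increase. A quick check using the article's own figures:

```python
# Compounding of a fixed doubling time over a decade.
years = 10
doubling_time_years = 0.5                    # ~6 months, the post-2010 trend
doublings = years / doubling_time_years      # 20 doublings
print(f"growth factor: {2 ** doublings:,.0f}x")   # ~1,048,576x, about a million-fold

# Consistency check with the figures quoted above.
palm_petaflop = 2.5e9                        # PaLM's training computation
ratio = 5e6                                  # "more than 5 million times" AlexNet's
print(f"implied earlier system: ~{palm_petaflop / ratio:,.0f} petaFLOP")  # ~500
```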

Scale-up was already exponential and has sped up substantially over the past decade. What can we learn from this historical development for the future of AI?

AI researchers study these long-term trends to see what is possible in the future.

Perhaps the most widely discussed study of this kind was published by AI researcher Ajeya Cotra. She studied the increase in training computation to ask at what point in time the computation to train an AI system could match that of the human brain. The idea is that at this point the AI system would match the capabilities of a human brain. In her latest update, Cotra estimated a 50% probability that such “transformative AI” will be developed by the year 2040, less than two decades from now.

In a related article, I discuss what transformative AI would mean for the world. In short, the idea is that such an AI system would be powerful enough to bring the world into a ‘qualitatively different future’. It could lead to a change at the scale of the two earlier major transformations in human history, the agricultural and industrial revolutions. It would certainly represent the most important global change in our lifetimes.

Cotra’s work is particularly relevant in this context as she based her forecast on the kind of historical long-run trend of training computation that we just studied. But it is worth noting that other forecasters who rely on different considerations arrive at broadly similar conclusions. As I show in my article on AI timelines, many AI experts believe that there is a real chance that human-level artificial intelligence will be developed within the next decades, and some believe that it will exist much sooner.

Building a Public Resource to Enable the Necessary Public Conversation

Computers and artificial intelligence have changed our world immensely, but we are still at the early stages of this history. Because this technology feels so familiar, it is easy to forget that all of these technologies we interact with are very recent innovations and that the most profound changes are yet to come.

Artificial intelligence has already changed what we see, what we know, and what we do. And this is despite the fact that this technology has had only a brief history.

There are no signs that these trends are hitting any limits anytime soon. To the contrary, particularly over the course of the last decade, the fundamental trends have accelerated: investments in AI technology have rapidly increased, and the doubling time of training computation has shortened to just six months.

All major technological innovations lead to a range of positive and negative consequences. This is already true of artificial intelligence. As this technology becomes more and more powerful, we should expect its impact to become greater still.

Because of the importance of AI, we should all be able to form an opinion on where this technology is heading and to understand how this development is changing our world. For this purpose, we are building a repository of AI-related metrics, which you can find on OurWorldinData.org/artificial-intelligence.

We are still in the early stages of this history and much of what will become possible is yet to come. A technological development as powerful as this should be at the center of our attention. Little might be as important for how the future of our world—and the future of our lives—will play out.

Acknowledgements: I would like to thank my colleagues Natasha Ahuja, Daniel Bachler, Julia Broden, Charlie Giattino, Bastian Herre, Edouard Mathieu, and Ike Saunders for their helpful comments on drafts of this essay and their contributions in preparing the visualizations.

This article was originally published on Our World in Data and has been republished here under a Creative Commons license. Read the original article

Image Credit: DeepMind / Unsplash