Elections can sometimes be quirky. Two of the past three presidents in the United States lost the popular vote, thanks to our electoral college system. In state and local elections the winner is sometimes determined by just a handful of votes, or even a coin toss.
Elections also typically have binary outcomes, where one side wins and the other loses. When a high-stakes binary outcome meets a close election and a little weirdness (or fraud), you can have history-making events driven effectively by chance.
Thus we have President Donald Trump. And Brexit.
One way to reduce the impact of electoral quirkiness is to lower the stakes of each individual election. We don't mind when a congressman is elected by a dozen votes, or a small-town mayor is chosen by flipping a coin, because the stakes are usually lower. But when we have a historically bad president chosen through the confluence of a few thousand votes in the right states plus an outdated electoral college system, it seems like there's too much at stake for such an oddball system.
Electing the Attorney General of the United States would help reduce the stakes of presidential elections, and lower the consequences of random happenstance.
Historically, of course, the Attorney General is appointed by the President and confirmed by the Senate. But many states have elected Attorneys General, and it seems to work just fine.
This would require a constitutional amendment, so changing to an elected AG is not going to be an easy path. But I can see a lot of benefits:
It separates the functions of running the federal bureaucracy from enforcing the laws. This lets the two sides of the executive branch serve as checks on each other.
It creates another nationally elected office, which reduces the impact of the electoral college without having to abolish it--the power dynamics of the college and the constitutional amendment process make outright abolition nearly impossible. The United States Attorney General can be popularly elected without touching the system for electing presidents.
It makes it impossible for a President to stop an investigation by firing the Attorney General. This specific tactic has been tried twice in my lifetime, and is an obvious weakness in our system of checks and balances.
If you want to take this idea to an extreme, you can divide up the powers of the Executive Branch among several elected cabinet officials, but it's important to balance this against the need to have a reasonably unified approach to policy and administration.
Unfortunately, the truth is much less exciting: there are a lot of interesting experiments going on, and some new approaches to developing fusion reactors. But we are still a very long way--at least 30 years--from actually putting a meaningful amount of fusion power on the grid.
At present, the most advanced fusion experiments still require a lot more energy to power the reactor than the fusion reaction generates. So talking about commercially feasible fusion power at this stage is a little like talking about building steam engines before we've figured out how to make fire.
The road to fusion power has a number of mile-markers, none of which has been achieved yet:
An experiment has to produce more energy from fusion than it takes to power the experiment. (In 2013 the National Ignition Facility claimed to have achieved this, but the claim ignored the substantial inefficiency of the lasers used to power the experiment. So most people still say that we haven't yet passed this milestone in any meaningful sense.)
An experiment has to be sustained for a significant amount of time with positive net energy production. (So far all fusion experiments with meaningful energy output have lasted only a short time, nowhere close to one minute, much less the weeks to months required for practical power production.)
An experiment has to capture the energy produced in a usable form for power generation. (I am not aware of any actual experiments to test ways to do this.)
A pilot plant has to produce power over a period of months to years.
A commercial test plant has to produce power at a cost that's close enough to other energy sources that it could be competitive with further development and mass production.
Then--and only then--are we truly within a few years of actually powering the grid with fusion. The final milestone is to deploy multiple commercial scale reactors and put power on the grid at a competitive cost over the lifetime of the plant.
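The first milestone above is usually discussed in terms of a fusion energy gain factor, often written Q. A simplified sketch of the two ratios involved (the "engineering" label here is my shorthand for the wall-plug accounting the NIF caveat refers to):

```latex
% Scientific gain: fusion power out vs. heating power delivered to the fuel
Q_{\mathrm{sci}} = \frac{P_{\mathrm{fusion}}}{P_{\mathrm{heating}}}

% Engineering gain: fusion power out vs. total electrical power drawn
% from the wall. This is the ratio that matters for a power plant, and
% it is much smaller than Q_sci when the heating system (like NIF's
% lasers) is inefficient.
Q_{\mathrm{eng}} = \frac{P_{\mathrm{fusion}}}{P_{\mathrm{electrical}}}
```

A claim of "breakeven" based on Q_sci alone can therefore be true and still leave the experiment consuming far more electricity than it produces.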
It is possible that a major breakthrough could get past the first few milestones in just a few years. It's also possible we could spend another half century stuck on trying to get to energy break-even.
And even with a major breakthrough, it could turn out that building a commercial-scale plant is much too expensive for the amount of power produced, and the technology goes the way of hydrogen-powered cars.
Fusion power is definitely worth further research. If we can ever figure out how to make it work, it is plausible that fusion can be a practical and economic power source without many of the drawbacks of nuclear fission reactors.
But it turns out that fusion power is a really hard problem to solve. Progress has been very slow, and while recent developments are exciting, it's still at the very earliest stages. Anyone who claims fusion is just around the corner is either drinking a lot of Kool-Aid, highly misinformed about the state of the technology, or being intentionally misleading.
For the past six months or so I've owned a Palette+ filament splicer for my 3D printers, and a few weeks ago I received my Prusa Multimaterial Upgrade. This has given me a unique chance to compare two different approaches to 3D printing with multiple materials in a single print.
The TLDR Summary
The Palette+ works by splicing together segments of filament in exactly the right lengths to change materials at exactly the right point in the print. The Prusa MMU2 achieves the same end goal by stopping the print, unloading one filament, and loading another each time it needs to change materials.
Both approaches work, but neither is perfectly reliable. Both the Palette+ and MMU2 are slower than a single material print and will waste some material. Often they are a lot slower and waste a considerable amount of material. The Palette+ is faster than the MMU2, but the MMU2 produces less waste.
Between the two, my experience is that the MMU2 is somewhat more reliable than the Palette+, and it's less expensive; however, the soon-to-be-released Palette 2 should improve the reliability. The MMU2 only works with a Prusa MK3 printer, while the Palette+ can be used with almost any hobby-level 3D printer.
If you want to do multimaterial prints and you already own a Prusa MK3, then the MMU2 is the better choice. If you already own some other 3D printer, then the Palette is the only choice. If you don't currently own a 3D printer and want to buy one for multimaterial printing, my choice would be to buy a Prusa MK3 and upgrade to the MMU2. But whatever you choose to buy, you should be proficient in 3D printing in a single material before you try to tackle multimaterial.
3D printing is cool technology. But most hobby-level printers are limited to printing with one kind and color of plastic at a time. There are some tricks you can use to do very limited multi-color prints, such as printing a few layers then pausing to change to a different filament. But that's a nuisance and can only really do "2.5D" objects that are more bas-relief than anything else. Multimaterial printing gives you the ability to make truly multicolored things without painting or post-processing. And to be completely honest and transparent, one of the reasons I like 3D printing is because I don't have the patience to learn how to paint or sculpt well.
Beyond just colors, multimaterial printing also holds the promise of making more functional things by combining different kinds of materials in a single print. For example, you can combine flexible and rigid materials to make prints that can move and flex in ways that might be difficult or impossible using any traditional manufacturing method. And you can print your supports in a dissolvable material for easy cleanup of even the most intricate support structures. For the hobbyist, multimaterial printing is a next-level capability.
The Urge to Purge
There are disadvantages to multimaterial printing, too. A multimaterial printing process has a lot more complexity than 3D printing in a single material, and that means more things that can go wrong, and more tweaking and tuning to get it exactly right. If you have reached the point where 90% of your 3D prints work on the first try, it may be very frustrating to take the plunge into multimaterial and watch your failure rate shoot way up.
Even when everything is working perfectly, multimaterial printing requires a lot more time and material. Every time you change material, the printer needs to extrude a bunch of waste plastic to ensure a crisp transition to the new filament; this waste mostly gets put into a "purge tower" which you throw away at the end of the print--though you can also use some of it for infill or supports and use some other tricks to reduce the amount of waste. (If you have a printer with multiple extruders then this purge isn't necessary, but multiple extruder designs have their own problems--see below.)
Fortunately, with both the Palette+ and the MMU2 you can easily print in single material mode, so your performance for single material prints will be unchanged. From my perspective, the added capabilities of a multimaterial print far outweigh the time and waste.
The Palette Way
The Palette+ works by splicing carefully measured segments of filament together, so that the splice should be extruded while the printer is printing the purge tower. The Palette software inserts pauses into specific points in the gcode, which the splicer will detect during the print so it can stay in sync with the progress of the print.
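The pause-insertion step can be sketched in a few lines of Python. This is a toy illustration of the general idea only, not Mosaic's actual Chroma implementation: the `;TOOL_CHANGE` comment and the `M0` pause command are stand-ins I chose for the example, not the markers Chroma really emits.

```python
# Toy sketch of a gcode post-processor in the spirit of what the
# Palette's software does: insert a synchronization pause before each
# tool change so the splicer can track the print's progress.
# (Illustrative only -- the ";TOOL_CHANGE" marker and "M0" pause are
# stand-ins for this example, not Chroma's real output.)

def insert_sync_pauses(gcode_lines):
    """Return a new gcode listing with a pause inserted before each
    tool-change marker."""
    output = []
    for line in gcode_lines:
        if line.startswith(";TOOL_CHANGE"):
            output.append("M0 ; pause so the splicer can re-sync")
        output.append(line)
    return output
```

A real post-processor also has to compute exact filament lengths between changes, which is where small tracking errors can accumulate.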
This system of synchronization is probably the weakest part of the Palette+ design. There's a long filament path between the splicer and the extruder, and small errors in tracking the print progress can lead to the splice hitting the extruder early or late, causing very visible stripes of the wrong color in the final print. Even after six months of tweaking and tuning my Palette+, large prints (ones with more than 500 or so material changes) will almost always have at least one visible defect.
A major revision to the Palette was recently announced, the Palette 2, which will address these problems by making the filament path shorter and using a new control unit, the Canvas Hub, to keep the splicer synchronized with the print.
The other major shortcoming of the Palette is that it relies on splicing together segments of filament. I've found that while the splices usually hold, I will occasionally have a splice inexplicably break in the middle of a print. When this happens there's no way to recover: you have to start the print over from the beginning. You're also limited to combining materials which will fuse together well: I've had good success combining PLA and TPU in a single print, but some material combinations won't be possible.
The Prusa Way
Prusa's new Multimaterial Upgrade 2 for the Prusa MK3 takes a completely different approach. The MMU2 sits on top of the printer and is effectively an automatic filament loader/unloader. When the printer needs to change materials, the print pauses while the MMU2 retracts the old filament, then loads the new filament into the extruder. The printer still purges to ensure a clean transition, but less than the Palette requires, because most of the old material was removed in the unload process.
This process is slow: it takes over a minute to go through the complete unload/load/purge cycle, while the Palette+ takes 30-45 seconds to finish changing materials. But there are no splices to break, which gives more flexibility in the kinds of material you can combine in a single print. And the MMU2 never gets out of sync with the print, so you never get stripes of the wrong color.
The MMU2 is brand new, and as of this writing has only been shipping for a few weeks. There are some "barely out of beta" problems with the early units, but in my experience prints with the MMU2 are as reliable as, and maybe more reliable than, the best I was able to achieve with the Palette+. I expect that as the bugs get fixed the MMU2 is likely to become much more reliable than the Palette+.
Probably the biggest drawback to the MMU2 is that it only works with the Prusa MK3 printer. The MK3 is an outstanding printer, perhaps the best in its class, but not everyone has or wants an MK3. The other drawback to the MMU2 is that it's only available as a kit which you have to assemble, whereas the Palette is fully assembled and (mostly) ready to go.
Why Not Multiple Extruders?
There are several 3D printers available with two, three, or four extruders. I've never owned a multi-extruder printer, but based on what I've heard from people who do own them, I'm not a fan of this approach.
Putting multiple extruders on a 3D printer adds a lot of cost and complexity, and in the end the results are often mediocre. Getting good print quality requires aligning all the print nozzles with a high degree of precision. Even if you succeed, it's likely that the idle extruders will ooze plastic and drag it around unless you have some mechanism to physically move them out of the way--and that means even more cost and complexity.
The people I know who own multiple-extruder printers have mostly given up on getting it to work well, and just use their printers in single extruder mode.
That said, there are advantages to multiple extruders. Perhaps the biggest is that using multiple extruders eliminates the need to purge when changing materials, so you don't have the same cost in time and wasted materials for doing multimaterial prints.
Which is Better?
Speed: Palette+, which takes about half the time to change materials as the MMU2.
Waste Material: MMU2, which wastes about half as much material as the Palette+.
Reliability: Slight edge to the MMU2. Both suffer print defects and failures too often to be true workhorses, but I think the MMU2 approach is inherently more reliable since it doesn't require splicing two pieces of filament together. The Palette 2 should address some of the worst problems with the Palette+, but the MMU2 should also improve a lot in the coming months as the early bugs get worked out.
Capability: MMU2, which can accept five different materials to the Palette's four, and can print with combinations of materials that don't fuse easily or at all.
Initial Setup: Palette+, which requires only a little calibration. The MMU2 is only available as a kit, and requires major surgery on your printer.
Compatibility: Palette+ works with almost any 3D printer, while the MMU2 only works with Prusa printers.
Software and Toolchain: MMU2. The Palette+ requires an extra step to post-process your gcode, and the Chroma software for post-processing tends to give mysterious "out of memory" errors on my Mac. Mosaic Manufacturing, makers of the Palette, are coming out with their own cloud-based slicer which will avoid the post-processing, but I'm skeptical that it will ever be as capable as the more mature slicers (plus if they ever go out of business the cloud-based slicer won't be available). Slic3r, the free slicer supported by Prusa, has extensive MMU support, and other slicers are starting to support it, too.
Support: Tie. Both Prusa and Mosaic have excellent support and online communities, though the Prusa forums are considerably larger and more active.
Cost: MMU2. Even though the price of the Palette 2 will be much less than the Palette+, the MMU2 is even less expensive.
Multimaterial printing is definitely still varsity-level 3D printing, no matter what equipment you buy. I think it's worth the effort for the extra capabilities, but if you're not willing to invest the time to really learn how to use your tools you may find it frustrating.
If you're trying to decide what to buy, I have a very simple decision tree:
If you already own a Prusa MK3: Buy the MMU2. It's cheaper, more reliable, and more capable.
If you already own some other printer and don't want to buy a new one: Buy the new Palette 2. Also buy the Canvas Hub, which is technically optional but realistically a requirement if you want good results.
If you are looking to buy a new 3D printer with multimaterial capability: Buy a Prusa MK3 and the MMU2, unless there's some must-have capability you need that Prusa doesn't offer (like a larger print volume).
Geoengineering, or large-scale modification of the Earth's environment, is a contentious topic among people debating the right response to climate change.
On the one hand, some people believe that efforts to reduce greenhouse emissions are probably going to be too little and too late to prevent major changes to the Earth's climate with substantial impacts to human activities. This group thinks the only way to preserve the current climate is to begin large-scale projects to actively counteract human greenhouse gas emissions. These people are probably right.
On the other hand, other people believe that countering greenhouse gas emissions with active geoengineering is a poor solution to the long-term climate problem. It would be a band-aid at best, temporarily covering up the underlying problem with a short-term solution with unknown side-effects. Worse, it would relieve the urgency to find a real solution, so if we ever stopped geoengineering it could lead to even larger and faster changes in the global climate. These people are probably also right.
As I see it, what both groups are missing is the fact that we humans are already geoengineering. We're just doing it in the stupidest possible way, without any goals or planning and only the vaguest understanding of the consequences of our actions.
So the real argument isn't over whether we should geoengineer the Earth's climate. The time to have that debate was 50-100 years ago, when scientists first began to understand that burning enough fossil fuels would lead to a warming planet. But at the time the problem didn't seem real or urgent, and nobody was paying attention.
Now that we are firmly established on the path of modifying our climate on a global scale, the debates need to be over how and to what ends we are going to engineer the Earth. And these debates won't be easy.
To begin with, it's not clear what the objectives of geoengineering should be. Preventing short-term catastrophe is a good start. But beyond that, there's an implicit assumption on all sides of the climate debate that the goal should be preserving (or restoring) the status quo. I don't think it's that simple, though: we may have changed things too much already for a return to the status quo to be a viable goal.
What's more, there are other possible goals for a geoengineering program which might be even better than just returning to the status quo:
We may want to have the ability to reduce the impact of major natural disasters. For example, every few hundred years there's a volcanic eruption big enough to cool the Earth for a few years and cause crop failures, famine, and suffering. It may be within our reach to mitigate this.
We may want to stabilize the Earth's natural climate changes over long periods of time. If we have the ability to prevent another ice age, do we want to use it? (Some scientists think we've already done this, just not intentionally.)
There will be some winners to the current global warming. Miami isn't going to fare so well, but here in Minneapolis we may appreciate our warmer winters and longer growing seasons. Perhaps there's a way to keep things a little warmer up here but save coastal cities from flooding.
A number of geoengineering schemes have been proposed, but most of them seem a little harebrained. It's true that seeding the stratosphere with particulates or spraying saltwater in the air over oceans seem like plausible ways to cool the climate relatively inexpensively. But we don't really know how well they would work, what the side-effects would be, or what the distribution of regional and global changes would be. Some of these ideas might even backfire.
What's needed is an actual engineering approach to geoengineering. We need to be testing and evaluating the effectiveness, costs, benefits, and side-effects of different ideas. We need to have the difficult political debate about the goals that we, as a species, want to accomplish with our geoengineering efforts. And in the end, we need to develop a set of tools for both short-term and long-term management of the Earth's climate so that we can control the level of greenhouse gasses in the atmosphere in a way that's going to achieve our objectives.
As I've written before, I'm cautiously optimistic that we will somehow muddle through. It won't be easy to get to the point where we're properly engineering our climate for the long-term benefit of humanity and the rest of the species on the planet. It may take a century or longer before the technological and political pieces are all in place, and in the meanwhile there will probably be some major disruptions.
But we really have no choice, since the alternative--just letting things take their course--isn't a recipe for long-term success or survival.
The Post-Trump Era of American politics will be on us sooner than we think. Six years and change (at the longest) is not that long in the grand scheme of things, and there's plenty of scenarios where the Post-Trump Era could begin much sooner than that.
Depending on how the end of the Trump presidency plays out, it's entirely possible that the country will be in the mood for significant reforms to our political system. It's been a surprisingly long time since that's happened: we amended the U.S. Constitution about once every 12 years, on average, between the Bill of Rights and the end of the Nixon administration. In the past 45 years, however, there's been only one amendment ratified, and that one was originally proposed in 1789. So we're long overdue for some tweaks.
I've been thinking lately about what sorts of reforms would make sense, given all that's happened since the ratification of the 26th amendment (which gave 18-year-olds the right to vote) in 1971. There's no lack of proposals out there, with the most common one being a constitutional amendment to overturn the Supreme Court decision in Citizens United. But I'd like to give some serious thought to what sorts of reforms might actually improve the way our system works, and also have the kind of nonpartisan, broad appeal that makes a reform widely accepted.
My first idea is a small one: requiring presidential candidates (and perhaps all candidates for federal office) to disclose their personal finances in detail. Every presidential candidate since Nixon had done this voluntarily, until Donald Trump. Until 2016 the tradition of releasing a candidate's tax forms was so firmly established that I think some people believed it already was a legal requirement.
This sort of financial disclosure is important because it shows where a candidate could have conflicts of interest, and it also demonstrates a candidate's willingness to put service to the country ahead of personal enrichment--both areas where our current president could use some improvement. It's not going to fix a lot of problems by itself, but it would at least raise the level of transparency and make it harder for an officeholder to engage in blatantly corrupt behavior.
But while this reform is small, it would also be easy to enact. Each state has its own rules about who is eligible to be on the ballot, and if just a few states (maybe even just one state) began requiring candidates to release their tax forms in order to get on the ballot for President or Vice President, then it would immediately become a de-facto requirement for the major party candidates. Neither party wants its candidate for President left off the ballot anywhere, and releasing your tax forms in California (or Vermont, or South Dakota) is as good as releasing them everywhere.
We seem to be living in an age filled with wildly hyped technologies that have nearly zero chance of succeeding in any meaningful way.
I'll be the first to admit that my record of picking technology winners and losers is spotty at best. I've gotten some things right (solar power), and some things wrong (I was skeptical about WiFi for a long time). But the current crop of technology hype seems to have an unusually rich streak of fundamentally flawed ideas which, nevertheless, are attracting substantial amounts of money and attention.
When I last wrote about Bitcoin, I cheekily began my description with, "for anyone reading this years in the future when Bitcoin has disappeared." Well, it's now years in the future and it's safe to say that Bitcoin is still very much around. My thesis then was that Bitcoin would never achieve success as a replacement for actual money because it solves the wrong problem.
Looking back, I think I can call that an accurate prediction: very few people still claim that Bitcoin will take the place of dollars or euros. Instead, thoughtful Bitcoin enthusiasts have moved on to talking about blockchain, the technology underlying Bitcoin, as the thing that will change the world. I think there's some merit to this, since blockchain solves some interesting and tricky problems, but so far most of the real-world applications I've seen for blockchain don't seem to have many advantages over traditional solutions.
But it's still Bitcoin that gets most of the hype and attention, especially after its spectacular 10x runup in value in 2017. And Bitcoin has proven to be insanely unscalable (one estimate claims that the energy to process a single Bitcoin transaction would power a typical American house for days or weeks) at the same time it doesn't seem to have many advantages over more traditional financial instruments.
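To see how an estimate like that translates into "days of household power," here's the back-of-envelope arithmetic. Both input numbers are illustrative assumptions I've chosen for the example, not measured values from the estimate in question.

```python
# Back-of-envelope check on the "days of household power per Bitcoin
# transaction" style of claim. Both figures below are rough,
# illustrative assumptions, not measurements.
ENERGY_PER_TX_KWH = 600.0      # assumed energy cost of one transaction
HOUSEHOLD_KWH_PER_DAY = 30.0   # rough daily usage of a typical US home

days = ENERGY_PER_TX_KWH / HOUSEHOLD_KWH_PER_DAY
print(f"One transaction ~= {days:.0f} days of household electricity")
```

Whatever the exact per-transaction figure, the ratio is what makes the scalability problem obvious: a payment network whose unit cost is measured in household-days of electricity can't compete with systems that settle transactions for fractions of a cent.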
Flying cars have been a staple of science fiction for almost as long as there's been science fiction, but every attempt to build one has been a massive commercial failure. For good reason: it turns out that a flying car is neither a very good airplane nor a very good car. But the idea keeps coming back, and the 2018 version is the fully autonomous flying drone-taxi. I've read articles about several such projects, but the one Airbus is working on is perhaps the most credible given Airbus' experience making actual airplanes.
The concept is that we can use autonomous flying drones to get above the street-level congestion in big cities. But if you think about this for even a moment it's obvious that it's not practical to have large numbers of aircraft like this in the air at any moment in a given downtown area. Aside from the obvious safety considerations--drones will need to have hundreds of feet of vertical and horizontal separation to avoid wake turbulence, and flying between buildings is just a dumb idea--there's the fact that these things are loud. Inevitably, intolerably loud. Nobody has ever built a powered heavier-than-air aircraft that could be called anything close to quiet, and multirotor VTOL drones are among the worst.
You've probably experienced the noise from someone flying a toy drone around a park. Now imagine a similar drone that's a thousand times heavier and probably a thousand times louder (at least). Now imagine a sky filled with hundreds of them. Now you know why it will never happen. At most there might be a handful of flying drone-taxis to carry the top 1% of the top 1% above the common folk. But that's hardly a revolution.
Hyperloop is the transportation concept so wacky that even Elon Musk originally took a pass on actually building it. But it's still around, and more than one company is still trying to develop the idea. Tellingly, after many years there's still no meaningful prototype, and no reason to believe that the costs will be even remotely competitive to existing technology.
And yet...the hype marches on. The latest, according to a February 2018 article in Wired, is that Musk is back in the game and wants to have the first working Hyperloop line in service by 2020. At least we won't have to wait long to see whether that target will be met.
There are lots of other examples. Fusion energy is back in the news (it's only 20 years away, and always will be). Robocars have made great technical strides in the past few years even as the business case seems as murky as ever. And who can forget Theranos, the massively hyped medical testing company which raised the better part of a billion dollars and turned out to be a straight-up fraud?
Having lived through the dot-com bubble, this moment feels very different to me. The Internet was obviously a big deal, and while there was a lot of silliness in the air, it was also clear that big changes were coming even if we didn't yet know exactly what would change.
Today it seems more like there's lots of very speculative money chasing a wide range of crazy ideas with no underlying theme other than the desire to find the Next Big Thing. And it could well be that there is no Next Big Thing on the near-term horizon: one Internet revolution is all we get for a while. Not that there won't be new ideas and big companies, but I'm skeptical that robocars, blockchain, and the rest of the current crop of "revolutions" will have the same impact on our daily lives as the Internet. Even ridesharing, arguably the most impactful recent development, isn't much more than a better way to deliver the same taxi services we had before.
My new Prusa 3D printer kit is supposed to arrive next week, about three months after I preordered it.
During that long wait I've been learning some of the software used in the open source printer world, including Slic3r, the slicer that's been customized to work with the Prusa printers. Fortunately, TierTime recently opened up their hobby printers to accept gcode from other software, allowing me to experiment with using Slic3r with my existing printers.
(For those not familiar with 3D printing lingo, the "slicer" is the program which takes a 3D model and turns it into gcode instructions for the printer, sort of like a print driver in the 2D printing world. "Gcode" is a nearly-universal format for the printing instructions.)
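To make the slicer/gcode relationship concrete, here's a tiny hand-written fragment of the kind of output a slicer produces. It's a minimal sketch using common commands (real slicer output runs to thousands of lines, and the coordinates and temperature here are arbitrary):

```gcode
G28                   ; home all axes
M104 S210             ; set hotend temperature to 210C
G1 Z0.2 F600          ; move nozzle to first-layer height
G1 X50 Y50 E2.5 F1200 ; move while extruding 2.5mm of filament
```

Each line is one instruction: the slicer's whole job is turning a 3D model into tens of thousands of these moves.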
One huge advantage of stepping into the open source printing world is I now have access to tools and accessories I couldn't use before. A case in point is the Palette+, a filament splicer for making prints with multiple colors and materials.
I bought a Palette+ to give me more options for multicolor printing than just the Multi-Material Upgrade by Prusa. Some customers have reported significant problems with Prusa's older model of MMU, and it seems like it's still very experimental. One nice thing about the Palette+ is that it can be used with more than one printer, so I've been experimenting with getting it to work with my Cetus.
And after a couple weeks, and convincing the Palette's manufacturer that they really should support the TierTime printers, it works. Thanks to the magic of open standards, I was able to add multicolor capability to my old single-extruder printer.
Getting the Palette+ to work properly took more effort than it should have, mostly because TierTime has some nonstandard stuff in their gcode processor. That's a good argument for why standards should be, well, standard.
It's been almost exactly six years since I bought my first 3D printer. In that time I've owned three 3D printers, all of which still work, and two of which I still own and use fairly constantly.
To date all of my printers have been fully-built models from the Chinese manufacturer TierTime. I've decided that it's time to take the next step and build my own printer.
So I've placed a preorder for a Prusa i3 MK3 kit, which I hope to receive around the beginning of February. I've also preordered the multi-material upgrade, which might show up around April.
With my six years of experience 3D printing, I think it's fair to call myself at least a highly competent journeyman. But I'm already learning that the open source world does some things very differently from what I'm used to in TierTime's products.
For example, TierTime's slicer provides only a handful of print settings: layer height, infill, print quality (one of three options), whether you want a raft, and a few parameters for the amount of support material. Slic3r Prusa Edition has around 65 different print settings, not counting the ones under the "Advanced" menu. This clearly represents not just a steeper learning curve, but an entirely different philosophy of how 3D printing should work from the user perspective. While I can see the value of the extra control, I've also managed to get by just fine so far with one tenth the number of parameters to adjust.
Another big, and surprising, difference is how the RepRap world still seems to be struggling with support and rafts. Six years ago, the Up's break-away supports were a major point in their favor (and one of the reasons I didn't go with something like a Makerbot back in 2011). While it's not always perfect, the support material I print with my current printers is generally fairly easy to remove, and the resulting surface of the print after the support is removed usually ranges from pretty good to perfect. But from reading online discussion, it seems that a lot of people still struggle with getting their printers to print supports that are easy to remove and don't leave ugly surfaces behind. I expected the open source community would have figured this out by now--and it's disappointing because not having reliable support and rafts really does limit what you can print and how you can print it.
On the other hand, the limited material controls TierTime gives me (I can only set the extruder and bed temperature on my Up, vs. 15 different material parameters in Slic3r) have cut off some of my printing options. I haven't been able to get some interesting filaments (like flexible filament) to work well, or even at all. And being able to print with four different plastics in a single print, as the multi-material upgrade allows, will be a real treat--even if one of those materials winds up being soluble support because of the problems with break-away supports.
I'm sure I've only just begun to scratch the surface. Taking my first steps into the world of open source printers as an experienced 3D printing hobbyist is guaranteed to be an interesting adventure.
The fact that solar and wind are now the cheapest sources of new generating capacity, plus their unique characteristics, means that over the next few decades pure economic forces are likely to flip power markets upside down and lead to a glut of energy.
Solar and Wind Will Be Overbuilt
Solar and wind power are different from traditional sources of electricity because:
Nearly all the cost is upfront capital expenditure, and there is no cost savings in curtailing overproduction (vs. coal or gas plants, where the cost of fuel is significant and reducing output when demand is low will save money).
Power output is variable and can't be increased to match demand.
The lifetime cost of power from a new solar or wind facility is lower than from any other new source, and solar and wind keep getting cheaper over time.
This combination of factors means that when a power company needs to add generating capacity (whether because of demand growth or because older plants are being retired), it's generally going to be cheaper to build renewables rather than a coal, gas, or nuclear plant. And because of the low cost and variable output of solar and wind, it will be cheaper to overbuild renewable capacity by some percentage, to allow the low cost renewable power to displace more of the (relatively) expensive coal and gas power even when the renewables aren't producing full power.
Once the solar and wind generation capacity is in place, the direct cost of generating power from these sources is very close to zero. The most rational, profit-maximizing approach to building future power generation is one which will inevitably lead to times when more power is being produced than consumed, at zero marginal cost to the utility. If the utility can find any buyer for this power at any price larger than zero, it can make a profit.
In other words, a glut.
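To see why overbuilding can make economic sense, here's a toy calculation. All the numbers (capital cost, capacity factor, fuel cost) are invented for illustration; real figures vary widely by market:

```python
# Toy comparison of overbuilt solar vs. running a gas plant, with invented
# numbers. Solar's cost is all upfront capital, spread over lifetime output;
# gas pays for fuel on every MWh it generates.

def solar_cost_per_mwh(capital_per_mw, capacity_factor, years, overbuild=1.0):
    """Capital cost per usable MWh, if we overbuild and curtail the excess."""
    usable_mwh_per_mw = capacity_factor * 8760 * years
    # Overbuilding multiplies the capital outlay, but (in this simplified
    # model) the curtailed output earns nothing, so the cost is spread over
    # the same usable energy.
    return capital_per_mw * overbuild / usable_mwh_per_mw

gas_fuel_cost = 40.0  # $/MWh of fuel alone, invented

solar = solar_cost_per_mwh(capital_per_mw=1_000_000,
                           capacity_factor=0.25, years=25)
solar_overbuilt = solar_cost_per_mwh(1_000_000, 0.25, 25, overbuild=1.5)

print(f"solar: ${solar:.2f}/MWh, 50% overbuilt: ${solar_overbuilt:.2f}/MWh")
# Even with 50% overbuild, solar can undercut the gas plant's fuel cost alone.
```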
(On a side note, utilities overbuild their capacity in traditional power plants, too, so that there's enough power for peak demand or when a power plant goes offline. But this peaking and reserve generation capacity sits idle until it's actually needed, since the fuel costs money. Solar and wind are unique in that they don't cost anything to generate once the plant is built.)
There have already been a few times when wholesale electricity prices have dropped to zero in certain places because of the overproduction of renewable energy. The world is only just getting started in building out the 21st century renewable energy grid, and before we're done, excess power production will be a daily occurrence in many parts of the world.
Demand Response and New Uses
The idea of an energy glut is all very weird to me. I grew up in the '70s and '80s in the shadow of energy crises, high gas prices, and 55 MPH speed limits.
One likely change is that we'll see a lot of power use shift to times when there's excess electricity. Even though electricity needs to be generated at the time it's used (unless someone stores it in a battery--which is getting cheaper, but will always be more expensive than using the power when it's generated), it turns out that many uses for electricity don't have to happen at a particular time. Heating and cooling is probably the best example, since heat (or cool) is easy to store for a few hours and a significant fraction of electricity use goes to climate control.
It's easy to imagine a smart thermostat that notices when the price of electricity is low and cranks the thermostat a few degrees (warmer or cooler, depending on where you live) so it doesn't need to run as much the rest of the day. Or a smart water heater that takes advantage of cheap power to heat up some extra hot water in the tank.
The technical term for this is Demand Response, and it's already starting to become a thing. In a few years it's likely to become a really big thing, especially in commercial and industrial applications, where users can shift large amounts of consumption and realize substantial cost savings.
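The smart thermostat idea boils down to a very simple decision rule. Here's a minimal sketch, assuming access to an hourly price feed (the prices, threshold, and setpoints below are all invented):

```python
# Minimal sketch of a price-responsive thermostat decision (summer logic).
# The price threshold and the 3-degree pre-cool are arbitrary assumptions.

def precool_setpoint(price, normal_setpoint=24.0, cheap_threshold=10.0):
    """Pre-cool a few degrees when electricity is cheap, banking 'cool'
    in the building so the A/C runs less during expensive hours."""
    if price <= cheap_threshold:  # $/MWh; glut hours often approach zero
        return normal_setpoint - 3.0
    return normal_setpoint

# Invented hourly wholesale prices with a midday solar glut:
hourly_prices = [45.0, 30.0, 2.0, 0.0, 5.0, 38.0]
setpoints = [precool_setpoint(p) for p in hourly_prices]
print(setpoints)
# [24.0, 24.0, 21.0, 21.0, 21.0, 24.0]
```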
It's also going to be interesting to see what new uses for electricity become important. At today's prices, for example, electric cars are more expensive to buy than gasoline cars but cheaper to drive--and over the lifetime of the vehicle, the electric car still winds up somewhat more expensive. But that might change if you could recharge your EV at one-tenth the retail price of electricity, as long as you did it when there's a glut.
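As a rough sketch of that claim, here's the per-mile arithmetic. The retail rate and EV efficiency are assumptions, and the one-tenth glut discount is the hypothetical from above:

```python
# Back-of-envelope EV charging cost with invented but plausible numbers.
retail_price = 0.12              # $/kWh, assumed retail rate
glut_price = retail_price / 10   # the hypothetical glut-hour discount
kwh_per_mile = 0.30              # assumed EV efficiency

cost_per_mile_retail = retail_price * kwh_per_mile
cost_per_mile_glut = glut_price * kwh_per_mile
print(f"${cost_per_mile_retail:.3f}/mile at retail, "
      f"${cost_per_mile_glut:.4f}/mile during a glut")
```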
The hard problem in renewable power has been, and continues to be, seasonal storage. Batteries can store electricity for a few hours or weeks, but even the cheapest batteries still cost many times more than generating the power when needed (though this is starting to change: in some electricity markets it is now cheaper to meet peak power demand with large batteries than with expensive "peaking" plants powered by natural gas).
In many places renewable power production varies considerably not just throughout the day, but over the course of the year. In Minnesota, on average, we get only around a tenth as much solar power in the darkest month of the year as in the sunniest. It's not uncommon for us to go weeks without seeing the sun in November and December. Batteries just don't have the ability to store electricity from June to use in November.
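The underlying reason batteries work for daily shifting but not seasonal storage is that their capital cost is amortized per charge-discharge cycle. A toy calculation, with invented numbers:

```python
# Why batteries can shift power within a day but not from June to November:
# capital cost is spread over cycles, and seasonal storage gets one cycle
# per year. The cost and lifetime figures here are invented.

battery_cost_per_kwh = 200.0   # $ per kWh of capacity, assumed
lifetime_years = 10

daily_cycles = 365 * lifetime_years    # cycled once a day
seasonal_cycles = 1 * lifetime_years   # cycled once a year

print(f"daily shifting:    ${battery_cost_per_kwh / daily_cycles:.3f}/kWh delivered")
print(f"seasonal storage: ${battery_cost_per_kwh / seasonal_cycles:.2f}/kWh delivered")
# The same battery is hundreds of times more expensive per kWh when it
# only completes one cycle a year.
```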
One exciting possibility for the coming energy glut is that it may enable solutions to the seasonal storage problem. If electricity is cheap enough and plentiful enough during the gluts, it may actually become economical to do things like synthesize liquid fuel using renewable energy. These ideas are being pursued in research labs, but in today's energy markets they are much too expensive to be worthwhile.
The World Is Changing
It seems almost inevitable that, if current trends continue (and there seems to be no obvious reason why they shouldn't), we will find ourselves with intermittent gluts of energy rather than shortages. This is going to be a very different world than the one we live in today, where the challenges will not be in finding enough energy, but in getting the energy to the times and places where it's needed.
I've been neglecting this blog more than a little the past few years. Got busy, life was getting complicated, and so forth. And after a while I got to experience the joy of overwhelming technical debt firsthand, when the version of Drupal I was running became so out of date that it was hard to keep running and a big project to upgrade.
But I finally got around to updating, leapfrogging from Drupal 6 to Drupal 8. I didn't take the time to do much customization: I only did enough to get my basic content moved over and put together a completely vanilla blog site. I'm not perfectly happy with where it stands, but considering the amount of work I didn't do to get here, it's not too bad.
With luck, having an updated and maintainable blog will encourage me to write more often again. Reading through some of my old articles has been interesting. And with a few tweaks here and there, I should be able to gradually get some of the layout and display features closer to what I want.
A lot has changed in the almost three year hiatus this blog has taken. Kids are leaving the nest, business is evolving, and dear God don't get me started on politics.
I write this blog mainly for myself, as a way to express my thoughts and ideas. I don't expect anyone has been terribly disappointed, or even noticed, that it hasn't been updated. Nor do I expect anyone will notice or care if I write again. But I care, and perhaps some of the breadcrumbs I leave here might help someone just a little bit down the road.