A giant impact outside our Solar System

I’ve come down to listen to a General Interest seminar about climate change given by a retired physics professor, but I’ve discovered it’s next week. I’ll have to wait to find out if it satisfies the stereotype. Since there is no point in going back up to my office before my next meeting, I thought I would mention a paper of ours that has received quite a lot of press coverage.

The paper is about Kepler-107, an exoplanetary system already known to host 4 planets. We collected a lot of high-precision spectra which allowed us to determine the radial velocity of the host star and, consequently, estimate the masses of the planets. The observations were made using an instrument on the Telescopio Nazionale Galileo, a telescope operated by the Italian National Institute for Astrophysics (INAF), researchers from which led the paper. I did an extra radial velocity analysis and carried out some dynamical simulations to test the stability of the planetary system.
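
For anyone curious about the mechanics of that last step: the planet’s gravitational tug makes the star wobble, and the semi-amplitude of the resulting radial-velocity signal, together with the orbital period and the stellar mass, gives you the planet’s minimum mass. Below is a minimal sketch of that conversion; it is not the analysis code we actually used, and the numbers in the example call are illustrative placeholders rather than Kepler-107 values.

```python
# A minimal sketch (not the paper's analysis code) of turning a measured radial-velocity
# semi-amplitude K into a minimum planet mass, assuming a circular orbit and
# M_planet << M_star.  The numbers in the example call are illustrative placeholders.
import numpy as np

G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30     # solar mass [kg]
M_EARTH = 5.972e24   # Earth mass [kg]
DAY = 86400.0        # seconds per day

def min_planet_mass(K, P_days, M_star_solar, e=0.0):
    """Minimum planet mass (M_p sin i), in Earth masses, from semi-amplitude K in m/s."""
    P = P_days * DAY
    M_star = M_star_solar * M_SUN
    # Invert K = (2 pi G / P)**(1/3) * M_p sin i / (M_star**(2/3) * sqrt(1 - e**2))
    mp_sini = K * np.sqrt(1.0 - e**2) * M_star**(2.0 / 3.0) / (2.0 * np.pi * G / P)**(1.0 / 3.0)
    return mp_sini / M_EARTH

# Illustrative call: a 1 m/s wobble of a Sun-like star with a 5-day orbital period
print(min_planet_mass(K=1.0, P_days=5.0, M_star_solar=1.0))   # roughly a few Earth masses
```

For transiting planets like these, the sin i ambiguity largely disappears, since the transit tells you the orbit is close to edge-on.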

The figure shows one frame from the middle of a hydrodynamical simulation of a high-speed, head-on collision between two 10 Earth-mass planets. The temperature of the material is represented by four colours: grey, orange, yellow and red, where grey is the coolest and red is the hottest. Such collisions eject a large amount of the silicate mantle material, leaving a high-iron-content, high-density remnant planet with characteristics similar to those observed for Kepler-107c. (Credit: Zoe M. Leinhardt and Thomas Denman, University of Bristol)

The reason that this is an interesting system is that the two inner planets (Kepler-107b and Kepler-107c) have very similar radii, but very different masses; Kepler-107c is more than twice the mass of Kepler-107b. If Kepler-107c were the innermost planet, then you could explain this through it having formed in an environment heavily bathed in radiation from the central star. Since it orbits beyond Kepler-107b, this explanation seems implausible. What is most likely is that Kepler-107c underwent some kind of giant impact that stripped away part of its mantle, leaving behind a dense (probably iron) core that now makes up 70% of its mass. Kepler-107b, on the other hand, has a composition more like that of the Earth, with the core making up around 30% of its mass.
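
The density contrast is really the heart of the argument, and it follows from nothing more than mass over volume. As a rough illustration (with placeholder numbers, not the measured Kepler-107 values, which are in the paper), two planets with essentially the same radius but a factor-of-two difference in mass differ in bulk density by that same factor:

```python
# Back-of-the-envelope bulk density from a planet's mass and radius.  The values
# below are illustrative placeholders, NOT the measured Kepler-107 values (those are
# in the paper); the point is only that similar radii with very different masses
# imply very different bulk densities.
import numpy as np

M_EARTH = 5.972e24   # kg
R_EARTH = 6.371e6    # m

def bulk_density(mass_earth, radius_earth):
    """Mean density in g/cm^3 for a planet of given mass and radius (in Earth units)."""
    mass = mass_earth * M_EARTH
    radius = radius_earth * R_EARTH
    rho = mass / (4.0 / 3.0 * np.pi * radius**3)   # kg/m^3
    return rho / 1000.0                            # g/cm^3

# Hypothetical example: same radius, one planet twice the mass of the other
print(bulk_density(3.0, 1.5))   # lower-mass planet
print(bulk_density(6.0, 1.5))   # same size, twice the mass, hence twice the density
```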

There is evidence for giant impacts in our Solar System; we think that the Moon formed through a collision between the Earth and a Mars-sized body. However, we think this is the first convincing evidence for a giant planetary impact occurring outside our Solar System.


32 Responses to A giant impact outside our Solar System

  1. The Very Reverend Jebediah Hypotenuse says:

    Fascinating.

    These events are rare – but then, the universe is very old.

    As the space.com article mentions, planet-scale collisions probably happened at least three times in the early Solar system: Earth-Moon, Mercury, and Uranus.

    This is a dangerous place.

    Thanks for sharing!

  2. The Very Reverend Jebediah Hypotenuse says:

    …the universe is very old.

    And really, really big.

  3. bobdroege says:

    And we can’t see the half of it.

  4. Andrew J Dodds says:

    The Very Reverend – plus Pluto and other KBOs.

    It seems that major collisions are the rule rather than the exception, in planetary formation.

  5. Jon Kirwan says:

    My first book on anything even slightly related, read by me many decades ago (and first published in 1960), was “An Introduction to Celestial Mechanics,” by Theodore E Sterne. As I read through here, just about the ONLY thing I kept wishing for (and knowing would NOT be in the published paper) is the mathematical treatment you applied and the sequence of moving through a process starting at first principles and then successively explaining the residuals one at a time to get where you got. (And what unexplained residuals still remain.)

    I did get a copy of your paper and will start into reading through it, tomorrow. (I will always read everything you write about here that you were involved in… it’s the least I can do for all the wonderful effort you’ve given to the rest of us.) I appreciate the heads-up! Thanks!

  6. Jon,
    Thanks. If you want more details, this paper explains how you can use stellar radial velocities to infer the masses of the orbiting planets.

  7. Marco says:

    ATTP, it looks like that retired physics professor is not going to be as stereotypical as you imagine he could be: https://doi.org/10.1017/RDC.2016.70

  8. Marco,
    Indeed, it may well be perfectly reasonable. It was the suggestion that we still need to disentangle natural and anthropogenic influences today that had me slightly worried. My concern may, of course, be unfounded.

  9. dikranmarsupial says:

    “Hope for the best and prepare for the worst”? ;o)

  10. Marco says:

    ATTP, I read/interpreted that part slightly differently, but sometimes I can be a bit naïve…

  11. Jon Kirwan says:

    Marco, I’ve not been following these side conversations (unrelated to this blog’s topic about another solar system.) But the link you provided has this in its abstract, “The study of temperature variations before human influence may help to eventually disentangle natural and anthropogenic causes for the global warming of our time.”

    I want to draw your attention to the phrase, “… BEFORE HUMAN INFLUENCE …” That’s a highly loaded assumption (or set of implied assumptions.) One of the implied assertions is that the Little Ice Age was global. So far as I’m aware, that’s not yet been shown and there remains significant debate about it. In fact, the TAR (Third Assessment Report), on which I volunteered for a time as a reviewer, specifically stated a conclusion at the time suggesting quite the opposite, namely that at most minor cooling of the Northern Hemisphere happened. (The results of the IPCC TAR, memory serving, were published in 2001.) Second, it completely ignores another emerging body of work which I’ve been curious about as well. William Ruddiman has two “guest posts” on RealClimate about this: http://www.realclimate.org/index.php/archives/2011/04/an-emerging-view-on-early-land-use/ and http://www.realclimate.org/index.php/archives/2018/10/pre-industrial-anthropogenic-co2-emissions-how-large/ and he’s also written a book called “Earth Transformed” that discusses, in a very readable format, his ideas about the impacts of humans (so-called “influences”) that may have occurred over quite a period of time. William Ruddiman’s theories aren’t merely “theories”; they are backed up by extensive and very difficult work performed over decades in various areas around the globe, and the results appear to support the ideas he’s proposed.

    So right at the start, in the “abstract,” I’ve already got a problem which I suspect isn’t dealt with in the paper. I’ll read it to see, soon. But I’m pretty sure, given the way that the abstract is written, that the paper may not have been written from a comprehensive view of the existing body of work on the topic.

  12. Jon Kirwan says:

    ATTP, thanks for the reply and the link. I got a copy of it and I’m definitely going to read through it. Of course, while that may help me with the theories applied, it will NOT help me with the datasets used, nor will I be able to work down to the final unexplained residuals in your case from a paper on theory. I’ll need datasets for the rest, and I’d love to compare what I develop with what your team produced (to test my own understanding, not to test your work). (Yes, I know that one silly hobbyist trying to replicate the work of a team of well-educated professionals is almost silly from the start, but every long journey starts with the first step and I’d like to at least try my hand to see what happens. Plus… well… I can ask you if I have a problem… hehe.)

    Anyway, thanks again so much for all you do here.

  13. Jon,
    The new data we used is at the end of our paper. Most of the other data you can get from this paper. However, these are low-mass planets with quite small signals.

    If you really want to try and extract a planet signal from radial velocity data, I would start with something like this one. This is for 51 Pegasi b, which has quite a big RV signature, so should be easy enough to find in the data.
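
    If it helps to see the shape of the calculation, here is a minimal sketch of the sort of fit involved; it is not our actual code, and the “data” are synthetic stand-ins, so with the real 51 Peg b measurements you would load the times and velocities from the downloaded file instead.

```python
# A minimal sketch (not the code from our paper) of pulling a planet signal out of
# radial-velocity data by fitting a circular-orbit model
#     v(t) = gamma + K * sin(2*pi*(t - t0)/P).
# The "data" below are synthetic stand-ins; with real 51 Peg b measurements you
# would load the observation times and velocities from the downloaded file instead.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

# --- synthetic stand-in data (replace with the real time series) ---
true_P, true_K = 4.2, 56.0                  # days, m/s (roughly 51 Peg b-like)
t = np.sort(rng.uniform(0.0, 60.0, 80))     # observation epochs [days]
rv = true_K * np.sin(2 * np.pi * t / true_P) + rng.normal(0.0, 5.0, t.size)

# --- circular-orbit model and least-squares fit ---
def rv_model(t, P, K, t0, gamma):
    return gamma + K * np.sin(2 * np.pi * (t - t0) / P)

p0 = [4.0, 50.0, 0.0, 0.0]                  # initial guesses: period, amplitude, phase, offset
popt, _ = curve_fit(rv_model, t, rv, p0=p0)
P_fit, K_fit, _, _ = popt
print(f"best-fit period = {P_fit:.2f} d, semi-amplitude = {K_fit:.1f} m/s")
```

    In practice you would usually locate the period first with a periodogram (Lomb-Scargle, for example) and only then fit the amplitude and phase.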

  14. Jon Kirwan says:

    I really like your suggestion regarding 51 Pegasi b and I’m sure it is the better approach for someone at my level. Thanks!! That’s how I’ll start. (I enjoy sitting down with my son and working through problems like this with him and he enjoys learning, as well, with me. So this is really good stuff.) So I think you really offered me a good way to start out and much thanks for that.

    Still, it’s not just about taking an easy problem and seeing things work well. Part of the fun is also taking a harder problem and seeing how many different mental tools need to be applied to finally get a satisfying result.

    For example, it’s really simple to understand the basic theory of a pendulum (especially when you assume the swing is through a small angle so that you can greatly simplify the math). And it is satisfying, at that level, when you build one and find that you get results which are close. But when you set a classroom to building pendulums and provide them with a really good means of accurate timing, you find that the results can no longer be explained. The theory gets you close, but because of the more accurate timing involved, you find that the discrepancy between theory and measurement exceeds the known measurement error, and you have to start looking for another cause. In this case, it will likely be the diameter of the hole different students used when making their pendulums. Some of the holes are larger, some smaller, and the pin “rocks” in that curve, generating the variations in results which cannot otherwise be explained. So you learn something new from this process and develop a more complex, but more broadly applicable, theory (and of course now find you have to control for, or take measurement of, more variables, too).
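
    To put a number on that: the small-angle period T0 = 2*pi*sqrt(L/g) picks up an amplitude-dependent correction, roughly T ≈ T0*(1 + theta0^2/16 + 11*theta0^4/3072 + …), and it’s exactly this kind of residual that good timing exposes. A quick sketch of my own (illustrative only, not from any particular classroom exercise):

```python
# Small-angle pendulum period versus the amplitude-corrected series
#     T ~ T0 * (1 + theta0**2/16 + 11*theta0**4/3072 + ...),  with T0 = 2*pi*sqrt(L/g).
# Illustrative only.
import numpy as np

g = 9.81   # m/s^2

def period_small_angle(L):
    """Period from the small-angle approximation."""
    return 2 * np.pi * np.sqrt(L / g)

def period_corrected(L, theta0):
    """Period including the first two amplitude-correction terms (theta0 in radians)."""
    T0 = period_small_angle(L)
    return T0 * (1 + theta0**2 / 16 + 11 * theta0**4 / 3072)

L = 1.0   # a 1 m pendulum
for deg in (5, 15, 30):
    theta0 = np.radians(deg)
    T0, T = period_small_angle(L), period_corrected(L, theta0)
    print(f"{deg:2d} deg: small-angle T0 = {T0:.4f} s, corrected T = {T:.4f} s "
          f"({100 * (T - T0) / T0:.2f}% longer)")
```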

    It’s that kind of process I’m looking for. Not just the simpler cases which are explained with more prosaic theory. But the more complex cases which teach me new things I’ve not thought about, before.

    You are exactly right that I should start more simply and thanks for that. But I’m not only looking for 1st order knowledge. I’d like cases which force me to uncover new ideas that have to be applied, too. That’s how I really gain broader and deeper understanding. It’s a long process. But fun.

  15. Dave_Geologist says:

    Jon, from what I’ve read (and links from the more recent Ruddiman article), the LIA was global but also (a) smaller than contrarians make out (b) most severe in the Northern Hemisphere (= North Atlantic because there’s not much land in the North Pacific) and (c) at least half explainable by deterministic natural forcings. As you’re no doubt aware, Ruddiman and others attribute the other half to human influences (or rather, anti-influences: depopulation of the Americas reversing some of his Early Anthropogenic warming). I tend to favour Ruddiman’s line, although I did post a link on RC to a 2016 paper suggesting it was natural, a land-vegetation positive (reinforcing) feedback. Ah, I see you also commented there, but I’ll include the link in case others are interested. I found it interesting because it showed rapid onset and they claimed to tie it to high northern latitudes where tipping points might be expected, e.g. changes from seasonal to year-round freezing, or to never getting above some metabolic threshold temperature, even in summer. One weakness is that they had to assume no change in the oceans, which are a large source of their COS tracer.

    Low atmospheric CO2 levels during the Little Ice Age due to cooling-induced terrestrial uptake (non-paywalled ms.)

    Here’s another (non-Ruddiman) one on the depopulation mechanism (open access). Earth system impacts of the European arrival and Great Dying in the Americas after 1492.

    Accounting for carbon cycle feedbacks plus LUC outside the Americas gives a total 5 ppm CO2 additional uptake into the land surface in the 1500s compared to the 1400s, 47-67% of the atmospheric CO2 decline. Furthermore, we show that the global carbon budget of the 1500s cannot be balanced until large-scale vegetation regeneration in the Americas is included. The Great Dying of the Indigenous Peoples of the Americas resulted in a human-driven global impact on the Earth System in the two centuries prior to the Industrial Revolution.

  16. Jon Kirwan says:

    Dave, thanks so much for this. I will definitely follow through by reading and considering the links you’ve offered me.

    I certainly enjoyed reading Ruddiman’s book, partly because I hadn’t considered the idea before. But partly because I could see just how much field work had also gone into the idea. I respect hard work put together with novel thinking. (Good ideas are a dime a dozen, so to speak. But when you can convince others to also put in time and effort to follow up, then that’s something.)

    (Even if later it turns out to fall for some other good reasons, I will then also learn from those results as well.)

    I also enjoyed the book because it was, quite simply, clear and well-presented (much better than many popularized presentations.) I also had a chance to review a video that was made at one of his presentations. That included some difficult questions and well-made criticism from the audience. In the end, I still found the ideas intriguing and I’m glad I was exposed to them.

    I’m pretty sure, regardless of how it develops in the future, that Ruddiman had no impact on the IPCC TAR conclusions about the Little Ice Age (since his efforts hadn’t been considered in them, so far as I can recall). But I haven’t yet gone through the subsequent AR4 and AR5 reports to see if these conclusions have changed in any significant way since.

    Again, thanks a lot for taking a moment to provide something to add. (I can access articles behind the usual Elsevier paywalls, so don’t worry about providing freely accessible links unless you are also being considerate to others when you write.)

    By the way, I really enjoy reading your writing here. It is all thoughtful and worth a moment’s time.

  17. Jon,
    If you’re really keen to do RV analysis, you can actually download the code we used in our paper. It’s in python and you might need to install some other packages, but it’s reasonably straightforward to set up.

  18. Jon Kirwan says:

    Thanks! I’m semi-familiar with python. (I’m just a little weirded out, right now, by the idea that you used python for a purpose like this.) But it will be some time because you are exactly right in suggesting that I need to start with something where I have a better chance at understanding the basics before trying something more complex. One thing at a time and I’m very glad you thought to include something easier to try. I do need that “stepping stool” so to speak. But I’ll grab up some stuff and file it away for later. And I really appreciate the chance you’ve offered me to try something like this. Thanks again!

  19. Jon,
    Nothing wrong with python 🙂 Most of the big simulation codes are written in Fortran, or C. Python, however, is quite commonly used for things like this.

  20. Jon Kirwan says:

    FORTRAN is the main language, I think. It provides some features for multi-core CPUs that cannot be provided by C (due to the differences in “guarantees” between the two languages). Python features many things important in programming (lambdas, for example), but it’s never been considered much in the area of multi-core. Since we’ve already reached the limits of Moore’s law and Dennard scaling in IC manufacturing, it seems that Python ought to be seeing less use, not more. But who am I to fight this? I’ll go with the flow, so to speak.

  21. Jon,
    Fortran is certainly what I use the most for big numerical simulations. Python is mainly, I think, used for data analysis, or relatively simple modelling. Python does do multi-core, though.
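
    For what it’s worth, the simplest route is the standard-library multiprocessing module, which runs workers in separate processes and so sidesteps the interpreter lock for CPU-bound work. A minimal sketch (my own illustration, not from any particular analysis code):

```python
# A minimal sketch of multi-core Python using the standard-library multiprocessing
# module: each worker runs in its own process, so the global interpreter lock does
# not serialise the CPU-bound work.
from multiprocessing import Pool

def cpu_bound(n):
    """A deliberately slow, CPU-bound task."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    tasks = [2_000_000] * 8
    with Pool(processes=4) as pool:            # four worker processes
        results = pool.map(cpu_bound, tasks)   # work is spread across cores
    print(results[:2])
```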

  22. dikranmarsupial says:

    Jon Kirwan wrote “It provides some features for multi-core CPUs that cannot be provided by C (due to the differences in “guarantees” between the two languages.)”

    Interesting, I’d be interested in details if you have any. I’d be surprised if that were true for reasonably modern versions of C (C99 or later). I do a lot of work involving intensive linear algebra, and if you have any sense, you will be using libraries that are tuned to your hardware (e.g. ATLAS), and it will be the same code whether it is called from C or FORTRAN (or indeed MATLAB).
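
    Incidentally, the same point is easy to demonstrate even from Python: numpy hands the matrix multiply to whatever tuned BLAS it was built against, so the heavy lifting is the same library code whichever language makes the call. A rough sketch of the kind of comparison I mean (timings will obviously vary with the hardware and the BLAS installed):

```python
# Rough illustration of why tuned linear-algebra libraries matter: a naive
# triple-loop matrix multiply versus numpy's "@", which dispatches to whatever
# BLAS numpy was built against (OpenBLAS, MKL, ATLAS, ...).  Timings are
# hardware- and build-dependent.
import time
import numpy as np

n = 150
A = np.random.rand(n, n)
B = np.random.rand(n, n)

def naive_matmul(A, B):
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            s = 0.0
            for k in range(n):
                s += A[i, k] * B[k, j]
            C[i, j] = s
    return C

t0 = time.perf_counter()
C_naive = naive_matmul(A, B)
t1 = time.perf_counter()
C_blas = A @ B
t2 = time.perf_counter()
print(f"naive loops: {t1 - t0:.3f} s, BLAS: {t2 - t1:.6f} s, "
      f"results agree: {np.allclose(C_naive, C_blas)}")
```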

  23. Dave_Geologist says:

    Thanks for the kind words Jon.

    And it’s good to see FORTRAN getting a look-in 🙂 . Didn’t someone at NASA put out something a while ago about crowdsourcing GCM modules to “replace the old, inefficient FORTRAN code”? I ignored it because I assumed it was a bright idea that had been sold to some administrator who didn’t understand programming. How did he know the old modules were inefficient? It stands for Formula Translation after all, and is more optimised for number-crunching than general-purpose languages. It’s less user-friendly than modern languages, but GCMs are a write-once, run-a-gazillion-times application. And I remember being taught in the 70s to use the power operator only for powers greater than three, because it was more efficient to do squares and cubes by multiplication; and to use matrix algebra for geometric transformations because it avoids costly trigonometric or transcendental function calls. Old code was written to be super-efficient. Yes, there were some overheads like non-dynamic array dimensioning, but that was fixed four decades ago, and is an easy fix if you have the source code because the statements will be in the first few lines*. Better DO-loops and text-handling were the big improvements in FORTRAN 77, IIRC. There probably wasn’t much text to handle in early GCM modules, but I can see where DO loops would be important in an iterative solver or time-stepper. I used computed GOTOs a lot, which was frowned upon as they’re hard to debug (you have to trace every instance that might have changed the value of the control variable). But they seem to be all the rage now! When I Googled I found some articles on their use in C and Python that read like they’d discovered a prize Easter Egg. Perhaps they’ll learn that it’s a Curate’s Easter Egg 🙂 .

    I mentioned on a previous thread the hardware-specific point, as another thing that makes Auditing pointless. I had modelling software where you could specify the seed number for Monte Carlo simulation, but you got completely different results running the same software version on a Sun or an SGI. Or on the same machine if you got an OS and libraries upgrade. Of course the PDF from multiple trials looked the same, so it didn’t matter. This was all in Unixland of course, where the adage “do as little as required, but do it well” applied, and “there’s a library for that” predated “there’s an app for that”.

    * As computers got more memory, I found that one of our software flexing routines failed for arrays bigger than 1024×1024. The vendor kept insisting that it must be a hardware problem. But by chance one of our IT staff had previously worked there, and he told me that it was old FORTRAN IV code and that was the maximum array size specified. But they’d lost the source code and couldn’t fix it.

  24. Jon Kirwan says:

    ATTP, I hadn’t realized Python is multi-core capable. I had heard of PyPy for JIT compilation, but my experience has always been with it as a byte-code-interpreted language, and none of that has been multi-core capable. It appears I need to widen the scope of my experience with it. (I worked on chipsets at Intel some years ago, circa the Pentium II.)

  25. Jon Kirwan says:

    dikranmarsupial, I started using C in 1978 when working on the Unix v6 kernel. I probably stopped following the C standard subcommittee debates by the middle of the 1990s, so my knowledge is admittedly dated. Looking at C99 just now, I see that they finally DID add in something I remember heated debates about: “restrict.” So you are right, I think. (I need to keep up better.) This makes a big difference for some algorithms, and it used to be the primary differentiating factor for FORTRAN. But now that you’ve got my attention to go back and check whether they added ‘restrict’ (I did, at least, remember the debates about it, so it was easy to just google it), I do see that it’s been added in C99. Thanks! What remains to be seen is how well these optimizations have been implemented. (I believe that the gnu toolset includes a FORTRAN front end as well as C and C++, so it’s possible that their implementation of optimizations will be roughly equally good across these front ends. However, there may yet be significant practical differences remaining in other implementations for supercomputers.) I’ll be keeping my eyes a little more open than before.

  26. Jon Kirwan says:

    Dave,
    FORTRAN has had the inside track (and has therefore also been given the hogs’ share of talented optimization effort for VLIW, supercomputers, multi-core + multi-CPU, etc.). That effort has produced a well-vetted set of compiler implementations. But as dikranmarsupial has shaken me into realizing, the C99 standard did finally add the “restrict” keyword, and this takes away the most important difference (at least conceptually) that gave FORTRAN a leg-up for highly optimizable code. Dr. Ellis’s Bulldog compiler and his fantastic efforts in demonstrating a wide range of ideas for optimizing towards VLIW are just one example basing itself on FORTRAN and not on C or C++. Even up to the mid-1990s, when I was last more heavily involved (I have written a few compilers), most of the research papers were developed using FORTRAN. For example, Mahlke et al., “Compiler Code Transformations for Superscalar-Based High-Performance Systems,” focused entirely on FORTRAN. (I just happened to have the paper not too far from hand; no better reason to mention it.)

    It’s still my suspicion that FORTRAN keeps an edge, though, knowing how hard it is to get a C compiler team and/or developer to do a near-complete revamp of their optimizations (yes, it’s not easy). There are so many other competitive details that they have to focus on. I can remember long, long debates with other compiler writers developing C and C++ compilers, where I would point out techniques known for decades (let alone more recent ideas that were interesting to me) and be told that they “just can’t.” They want to. Like me, they enjoy delving deeply into the basic block and DAG structures, register allocation, or whatever… and finding new, better ways to generate code. It’s why any of us got into the work in the first place. But people want color-coded keywords, fancy editing features, slick IDEs, etc. And this sucks the air out of “doing optimizing work” (which is hard and takes a LOT of time) for an audience that won’t even know it is there in the end. Otherwise they won’t sell their product and will soon be out of business. So optimization gets short shrift unless there are very strong “evolutionary pressures” otherwise.

    Now, if C and C++ were in the mainstream of the very highest end of computing and there weren’t already standard libraries which optimized the functions, therefore forcing the compiler itself to either shine in that area (or not), then the compiler writers might spend more time on optimization and folding into C and C++ compilers those optimizing ideas which have some time before already been placed into the mainstream of FORTRAN compilers (which are pretty much ONLY used where “really fast computation” matters more than anything else does.)

    So I need to go back and see what’s happening with C/C++. There might be good stuff. I just have a gut feeling that FORTRAN still holds an edge because its only purpose these days is high-end computation, and this will force the vendors to focus on ALL of the right details.

    My programming dates back to about 1970, though I worked on much older computers (such as one using mercury delay line tubes, mounted horizontally in racks, to form a memory system). The PDP-8, PDP-11, PDP-10, IBM System/3, HP-21xx, a small wire-wrapped custom computer I built in 1974, and then the MITS 8800 and Altair 8080 (I still have both around here), leading into the VAX-11/780, formed my educational period. I had to redesign badly designed 4k dynamic RAM cards to make my Altair 8800 function (I couldn’t afford static RAM chips back then). I worked on basic assembler for the IBM 360 and 370, used Algol-68, BASIC, and FORTRAN, used operating systems such as TOPS-10, RSTS-11, and RSX-11M, and eventually got to work on the Unix v6 kernel code where I first learned C. My first MITS 8800 had only 256 bytes of static RAM to start, in 1975. By the time I got the two 4k dynamic cards working, I was able to run “paper tape BASIC” from Microsoft.

    The computing pyramid (as Steve Ballmer liked to use as a prop) has grown a lot. It used to be that almost everyone working in this area had to have a strong interest in mathematics (because you had to know Chebyshev and non-linear minimax just to get a decent transcendental function implemented), a strong interest in electronics (to fix and/or keep things running, or wire up something you needed to hook up to get a job done), and a strong interest in programming (there weren’t many libraries, so you really had to be good at almost everything related to programming). Today? Almost anyone can drag a button onto a form and figure out how to add code when it is clicked. So the base of the programming pyramid is very, very big today and almost everyone in society can count themselves as some kind of programmer or another.

    Things change. There’s good in that, and there’s bad. I remember a student in my class at PSU coming into office hours and telling me that she didn’t think she made the right choice to pick a degree in CS. I asked her about why she made that choice and she said, “Well, I wanted something that would be low-stress and would pay well and I thought that either computer science or accounting would be good.” My brain had to take a moment to gather up what she’d just said, because in my day people in computer science or engineering or physics (my major was physics) really “had no choice.” And in no possible way would any of them have ever, for so much as a split-second, have wondered about accounting as the next option!

  27. Dave_Geologist says:

    Thanks Jon. Very enlightening! One of the nice things on this site is that while we sometimes go off on a tangent, BANG!, a real expert pops up. As you say, a lot depends on architecture-specific features, and the quality of the compiler, so if high-end number-crunchers mostly use FORTRAN, supercomputer vendors will have noticed and responded.

    I only go back to an IBM 8600, then a PDP 11/70. And programming not compiling. I salute compiler writers! A while ago I did some Googling, and realised why I’d been at loggerheads with my boss 35 years ago. He kept telling me to make more use of virtual memory, I said “no, I’ve tried and it slows things down”, he said “you must be doing something wrong”. The machine was an upgraded PDP 11/45, and I found out the 11/70 had a faster bus to take advantage of the upgraded peripherals. I bet my employer skimped on the upgrade, leaving bottlenecks in the system.

  28. dikranmarsupial says:

    Jon, thanks for the interesting comments. Programming languages are generally a matter of “horses for courses”, and C/C++ are more optimised for systems programming rather than numerical computing, so it wouldn’t unduly surprise me if Fortran still had the edge on C for that kind of thing. I teach Java, C and C++, and one of the things I try to get across is why different programming languages are the way they are. I’ve written one useful Fortran program in my life, and that will do for the time being! ;o)

    I used to have some PDP 11 FALCON SBCs, but I never got around to doing anything with them, I wonder if I still have them somewhere…

  29. Jon Kirwan says:

    I admit I do sometimes dream of finding a wayward PDP-11/20, PDP-11/45, or PDP-11/70 (I definitely want the front panel) and bringing it home. Or an HP-2000F time-sharing system based on a pair of the HP 21xx processors. Either working or else not too crippled so I’d have a chance fixing it in my spare time. (And, of course, I’d want the KSR-35 and ASR-33’s that go along.) My wife might let me have a toy if such lightning were to strike me once. But she’d probably insist that I go build a museum if it struck more than once. 😉

    The PDP11 has 8 registers, 8 addressing modes equally applicable to those 8 registers, supports co-routines, recursive function calls, and every register can be used with stack-like addressing modes. The HP21xx has two computation registers (A and B, but the logic operations only work on A) and an M register for an address pointer. That’s it. And it can’t do recursive function calls or co-routines and hasn’t a clue what a stack is. It doesn’t even have an internal register to hold a return address. Calling a subroutine meant “blasting” the first instruction word of the subroutine with the return address and returning from the subroutine meant using a jump-indirect through that word. But I like them both for some odd reason. Perhaps because they represent such different design choices.

    Today, anyone can sit down with a cheap FPGA board and whip up some VHDL or Verilog code, do a little manual floor-planning to combine with the automatic planner, and use freely available tools to generate a working CPU of their own making for very little money.

  30. dikranmarsupial says:

    Indeed, it’d be a nice project to reconstruct my old Jupiter Ace on an FPGA, but I suspect somebody has already done it!

  31. Dave_Geologist says:

    Gosh, I’d forgotten all about FORTH!

    You could always get a Raspberry Pi, but it wouldn’t be the same, would it…

  32. dikranmarsupial says:

    A Raspberry pi emulator would be a good second! Forth was a really good language for such limited hardware.
