
Irina Conboy on Academic Publishing

This interview focuses on the real meaning of journal impact factor.


Today, we are talking with Dr. Irina Conboy, Editor-in-Chief of the journal Rejuvenation Research, about impact factor and other aspects of journal publishing. This journal is an important academic publication for our field and publishes many papers focused on aging and rejuvenation research.

Hi, Irina, and thanks for taking the time to talk to us. Could you tell us a little bit about yourself and the journal?

Yes. Hi, Steve. Thanks so much for this interview opportunity. In my work, I wear several hats as well. I am a professor of bioengineering at the University of California, Berkeley, and I’m part of the Quantitative Biosciences Consortium (QB3) between the Berkeley, San Francisco, and Santa Cruz campuses of the University of California.

I’m also the editor-in-chief of the journal Rejuvenation Research. As of now, I have just completed my first year in this position.

Yes, we definitely have quite a bit in common there, and I can definitely sympathize as a fellow editor-in-chief about how busy it can be as well. Speaking of which, what does your typical day look like as the editor-in-chief?

First of all, I would like to mention that recently, there has been a flurry of new journals in the field of aging and rejuvenation, which is a very promising development, of course, because it means that there is a lot of interest in this scientific area. I’m sure that as a fellow editor-in-chief, you realize that as well, because your news outlet is dedicated to the same area of science and medicine.

I would like to mention Geroscience, which is an up-and-coming journal in our field, and also the Aging journal, which came after Aging Cell. In my opinion, it really is doing a great job, though I may be a bit biased because I’m on the editorial board. As for Geroscience, we published a few papers there after reading some very interesting papers from that journal.


My typical day as Editor-in-Chief actually consists of brainstorming about how I can improve the impact of manuscripts submitted to the journal. I’m asking my colleagues in our scientific field to please send me their research papers and even reviews and perspectives.

Our goal is to be at least on par with the Geroscience and Aging journals and perhaps even surpass them at some point in quality as a journal of rejuvenation research.

It is, of course, good news that there are more journals now dedicated to the subject.

I’m certainly noticing that there seem to be more papers about aging, in particular those that are more focused on the rejuvenation side of things and actually doing something about aging rather than just talking about it out of scientific curiosity. It would be interesting to track how many papers and mainstream media articles there have been over the past few years to see if the number really is rising. Do you feel they have increased?

In my opinion, absolutely. Yes, it percolates into mainstream journals and newspapers all the time now. The popular highlights are distributed widely and frequently, but that does not mean that the scientific quality of rejuvenation research has improved in every single publication that is highlighted.

This is a good segue to the next part and what we are looking for at Rejuvenation Research with respect to submissions, which is a big part of my work as Editor-in-Chief. We are rebranding the journal with the goal of publishing innovative science and technology, moving away from the hype and silver bullets towards the hardcore science that has the translational promise of additional decades of healthy life.

Is there reputable data indicating that this is possible, or is it only a prediction? What does the data suggest? Basically, we want to focus on the actual science rather than flashy titles and abstracts that highlight whatever is currently popular and sounds exciting.


Yes, as you say, let’s get back to the hardcore science. Let’s get the evidence. That’s a great approach to take, and there is a problem with hype and snake oil in the field, of course.

It is not snake oil, per se. But it is the transmission of popularized information, which at times exaggerates the positive. Rejuvenation Research is moving away from that and towards publishing high-quality, well-controlled research manuscripts with accurately interpreted, not exaggerated, primary data. And, of course, excellent reviews that do not merely cheerlead for other publications but critically evaluate them as well.

That then brings rejuvenation science into the mainstream from something that is orbiting a little bit out there, though we definitely need a certain amount of cheerleading.

Yes, it’s important to try and keep it grounded.

We don’t want a situation of overpromising and underdelivering, because it doesn’t do the field any good. It’s about striking that balance so that people are not disappointed and don’t then dismiss the field as nonsense.

I also want to briefly mention Aubrey de Grey, the previous Editor-in-Chief, to say that he did a marvelous job in starting Rejuvenation Research back when “rejuvenation” was not really a mainstream or acceptable term and everybody was saying “aging” instead. He basically concluded that we know enough about aging to think about rejuvenation.

When I came on board as the new Editor-in-Chief, it was great that the journal was already established and on PubMed.

The key thing that I decided to do is to change the trajectory, pushing Rejuvenation Research into a different orbit and bringing it to the forefront of the mainstream aging field.


We want to focus not on anecdotal things, like some people eating a secret mushroom and becoming younger according to some prediction or testimonial. Rejuvenation Research aims to publish novel, high-impact phenomena and mechanisms that are directly or indirectly related to the aging process, along with approaches for attenuating, reversing, and possibly even preventing age-related diseases. This scope sets us apart from other so-called specialized journals.

That is a good segue to our impact factor discussion. The best high-impact articles are generalizable. What people discover in those papers can then be repeated throughout the world in different experimental systems and can broadly improve health when applied translationally.

That could be said about induced pluripotent stem cell technologies or CRISPR technologies or in the way exposure to a young systemic environment rejuvenates aged progenitor cells, which we published back in 2005. When other research groups do the same or similar studies, they not only repeat the discovery but build on it, which opens new horizons.

That’s what I call the high impact of the manuscript. My goal for Rejuvenation Research is that we publish such manuscripts. High-impact manuscripts speak for themselves, regardless of the journal where they are published.

Exactly, and of course, journals are not without their problems. So, let’s talk about the elephant in the room, journal impact factor. 

For those unfamiliar with it, the journal impact factor (JIF) was launched about 40 years ago and has shaped academic behavior ever since. The impact factor is used to gauge the relative importance of a journal and to measure the frequency with which the “average article” in a journal has been cited. However, it is not without its controversy and issues, as you will see.
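
For reference, the standard two-year calculation that Clarivate publishes can be written as follows; the five-year variant simply widens the window to the five preceding years:

$$
\text{JIF}_Y = \frac{\text{citations received in year } Y \text{ to items published in years } Y-1 \text{ and } Y-2}{\text{number of citable items published in years } Y-1 \text{ and } Y-2}
$$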

Firstly, alongside research and literature reviews, which are typically citable content, journals often also publish things like letters to the editor, editorials, news, and other similar content. These types of articles can attract citations from scholars without counting as citable items, which inflates the impact factor score.

Do you agree that academic bloat from non-research articles is a problem? And if so, what are the kinds of ways that we can try to minimize this issue in the journal?

Yes, absolutely. The perceived impact factor mainly reflects one parameter, the number of citations assigned to the journal’s publications, and it can be artificially inflated. One way to inflate it is the one that you mentioned, which is to publish bits and pieces of information that you know will be cited often.

Another, unfortunately, is to redirect future citations towards the same journal and further increase the impact factor. For example, many people find it easier to cite something that has been highlighted in the popular press rather than reading the specialized papers that provide the foundation for what was published in the so-called high-impact journals.

As a result, we have this circle in which articles published in high-impact journals cite other articles published in high-impact journals, which sways the information stream and shifts impact away from the foundational discoveries through the redirection of scientific citations.

Another example is that people often cite reviews when they want to make a point, and within that review, there are one or two actual papers that may, or may not exactly, reflect what they want to say with their reference. It’s easier to just cite the whole review on the topic. That’s another example of the redirection of information, or swaying of the impact factor.

Yes, everything tends to get buried. I’ve seen people citing reviews, which can have a hundred or more referenced papers in them, as a way of supporting their case. I usually ask which part of the 8,000 or more word review am I supposed to be looking at here?

Exactly. I know that there is a lot of disappointment in the scientific community that we give awards, promotions, and tenures based on something that we know to be less than optimal.

Indeed, it’s a real problem. Another problem is that the impact factor is typically, though not always, based on a two-year citation window. Some people suggest that this really isn’t enough time for a published paper to gain traction.

For example, the average biomedical research paper gathers up to 50% of its citations somewhere around the eight-year mark, not two years, and it can take even longer, depending on the field. I believe that for sociology and psychology, it can be even longer, maybe 10 years. So, this short two-year window really doesn’t give an accurate reflection of a journal’s true impact.

Clarivate actually publishes an annual Journal Citation Reports in which it doesn’t just give the journal impact factor based on the current two-year method; it also uses its own in-house calculation based on a five-year timescale. Unfortunately, most journals do continue to stick to this two-year window. How can we potentially address this problem?

I noticed firsthand, as the Editor-in-Chief of Rejuvenation Research, that starting with a low so-called impact factor of around four discouraged the submission of excellent manuscripts to the journal.

Right now, our editorial board is brilliant. For example, we have Tom Rando and Judy Campisi, who are household names in the field, but because the so-called impact factor of a journal makes a difference in people’s careers, researchers will not submit high-impact studies to Rejuvenation Research. Even if those studies are about rejuvenation, they try to publish them in Nature, Cell, or Science instead, even though Rejuvenation Research is the appropriate journal, and even if they’re rejected by those other journals, they will still expend resources and effort to publish there.

In terms of two years not being enough time, absolutely, because people who reference certain papers right away, within two years, might overlook the implications of the work and its connections to previously published literature. It’s only later that those connections emerge, and they could be positive or negative.

So, even with a rigorous scientific approach, a two-year publication score sways things again, subjectively, towards the journals that already have a high impact score, because people tend to browse through those journals and the popular highlights of them.

“Have you read about this work in Cell, Science, or Nature that was highlighted by Time magazine?” people say. Of course, that will be the first thing that they start referencing. Meanwhile, there could be a paper that really provides the foundation for that Cell, Science, or Nature paper and was published a few months before, but it was not appropriately cited and is deprived of subsequent citations because everybody will keep citing the one in the higher-impact journal.

That is unfortunately not an infrequent phenomenon; when deciding what to cite in their papers, authors sometimes seem to exclude the so-called specialized work that was published the same year or the year before so that what they’re doing appears more novel.

It takes a field more than two years, maybe five or six years, to reconcile all these points and to say, “Hey, wait a minute, the same key discovery was already on PubMed the year before you published your work.” That means there was actually less innovation than suggested, and eventually, the citation stream might start to change years later to more accurately reflect this.

A two-year citation score might be too soon, and at the same time, it reflects the general phenomenon of redirecting citations because once 50 people have cited a recent Nature paper, 500 other people might be influenced to do the same.

Very few of us actually read every single paper in depth, look at every single figure, and then do a search on PubMed to see what was already known in the field. Some of us do, but not all of us.

I don’t think anybody’s got the time to read it all, really, unless they’re an AI, perhaps, because AI is becoming more sophisticated and it’s great at analyzing lots of bulk data.

I know they’ve been experimenting with AI to look at research papers in the past to find connections that were not made by humans because they can’t be everywhere and read everything. I know that AI is often hyped up, but it might actually be used in a way to try and mitigate this problem.

I totally agree. I think AI could be used to mitigate this problem, but journal clubs can also help. Most laboratories of the world have journal clubs where they look through a list of recently published papers from so-called high-impact journals, and together, they work a bit like an AI.

We are a collective of natural intelligences, and through that, we do identify technical demerits, scientific demerits, lack of innovation, and the positive things too. Unfortunately, that takes place at a very specialized level and often remains there, not spreading to the broader community.

My goal as the Editor-in-Chief is to see what we can do to shift focus from the impact of the journal to the impact factor of the individual publications, because then, the impact can be assessed in a couple of years.

Yes, it does seem strange, and it’s clear at least from my perspective that it has a lot of problems.

It’s also the opinion of maybe a maximum of four people that decides whether a paper will be published or not: four out of thousands of scientists in the field.

That means that the paper was not really scrutinized by the scientific community or by all of the experts. It is just the opinion of the handling editors whether they want to send it for review. That is the first threshold: how much that editor understands the field, a lot, a little bit, or not at all. When a paper is in peer review, the reviewers can have non-obvious biases or conflicts of interest.

These are not the obvious things, like whether you are from the same school as the author or have collaborations with an author, but: do you dislike that person? Did they ask you a tough question when you presented a scientific talk? Do you dislike that person because what they’re publishing goes against what you published and have funding for? Those are non-obvious conflicts of interest. There are three or four of these reviewers, who decide if the paper is published or not, and even one bad recommendation can block the paper, at times through scientifically erroneous critique.

I also think there’s another problem with just publication in general, and that is the reporting of negative experimental results.

I’ve written about this in the past: researchers, or certainly many I’ve encountered, are often reticent to publish anything negative. That’s a huge problem, because it can lead to repetition of experiments and people going down the same wrong path. One thing I did suggest, as a bit of a joke but also slightly seriously, is to have a journal that doesn’t publish anything but failures. We could call it “Failures, the Journal”.

I was always taught that you can learn as much from failure as you can from success, but if the system is geared up to punish failure, then something has to be done about the system because it’s not helping overall progress in the field. It means people are going to repeat the same mistakes, they’re going to waste time and money doing the same experiments when the information is already known but not published.

Yeah, that’s a good thought, because I mentioned in some of my editorial pieces for Rejuvenation Research that we do welcome work that did not produce a confirmation of hypotheses. We don’t even call them failures, by the way. There is a hypothesis and a null hypothesis, and they should be given equal weight.

Unfortunately, many people try to pull the weight towards confirming their hypothesis, but showing that your hypothesis and perhaps the one shared by others is incorrect and instead that the null hypothesis seems to be true is super important.

We welcome such submissions at Rejuvenation Research, as they are like the canary in the coal mine. We published one of the Conboy Laboratory papers in Rejuvenation Research about the large number of false positives in one of the commonly used methods of comparative proteomics: how to find these false positives and how to avoid erroneous conclusions.

We did not get any references to that paper at all for the whole year, which for the Conboy group is unusual. Yet, at the same time, we published other pieces of scientific work that were referenced hundreds of times.

This could be because it shows that many previously published manuscripts were not entirely accurate and the field should perhaps do more control experiments going forward. Or, it could be because we published it in Rejuvenation Research, and it was just overlooked based on the venue not being seen as a scientific journal worth referencing.

Going forward, I would like to emphasize that we always welcome the negative findings that you call failures. We don’t consider them to be failures, we consider them successes. They are successful in warning people against doing research in the wrong direction or doing research that is not rigorous because certain controls were not included.

The way that one really finds that something is not working is by doing many more control setups, with those controls being both positive and negative. If one then does the comparisons and discovers that a paradigm turns out not to be accurate, this would be an example of superb research and an excellent publication.

This is an important point, that there should be a difference between the impact of a journal and the impact of an individual paper. Many excellent scientists, including Nobel Laureate Professor Randy Schekman, share this opinion.

There have been several papers published in high-impact journals and then later retracted by the journal in part because the researchers really made it their goal to publish in Nature or Science. For example, the paper on STAP cells, the pluripotent cells that were claimed to be induced by acid treatment, was retracted.

If the researchers had actually done the controlled experiments that reviewers told them to do and discovered that there was no such cell that could become pluripotent simply because it was exposed to an acidic environment, it would have been publishable in a so-called specialized journal. Instead, the pressure to publish in a high-impact journal was so high that they fabricated the results.

That’s the problem, isn’t it? Because of that motivation, research can be sensationalized just like some newspapers try to grab attention using hype and sensationalized headlines.

Regarding the word failure, as you say, I think that is the wrong word. I was sort of paraphrasing what other people say. They say failure, though I do not see it as failure either, as we can learn from all results.

Now I’m a fan of an artist from the seventies, early eighties, called Bob Ross. You may remember Bob Ross.

Oh, yes, absolutely.

He was a wonderful artist and very peaceful, and I always liked the way he described if you’d failed or made a mistake. He said “We don’t make mistakes, we have happy accidents”. I think that’s a fantastic way of framing it. Perhaps we should have a Journal of Happy Accidents. 

Haha! I don’t think there should be such a journal, but there should be a collection in every journal dedicated to the negative findings and critical reviews of findings and discoveries. Does the research need more controls? What is the true innovation of the research? Or are researchers repeating something that was published in 1996 but with new instruments and technologies?

Or, perhaps, there was an honest mistake? It could be that the researchers simply overlooked the possibility of doing an additional set of experiments and once they did them, it turns out that their conclusion does not generalize.

Another issue that my colleagues and I have encountered is that if you try to publish so-called negative data suggesting that something previously published underestimated or overestimated something, it is super difficult to get it past the editors.

Possibly, this is because after a peer review by a couple of scientists, there is an impression of an established paradigm, and it is psychologically unpleasant to admit that there was a mistake.

As a result, there is a certain pressure on science to move in the direction of a dominant paradigm and a counter-pressure on the new data questioning it.

Yes, it’s a massive problem. I’m beginning to have one of those “tip of the iceberg” moments where the more you explore, the bigger you find out the iceberg is, and before you know it, your Titanic is metaphorically steaming towards it, which is a problem.

I would say that this problem is not only true of our field but almost certainly true of most if not all other scientific fields. In fact, I’d be amazed if there was one scientific field that didn’t have this same issue because it’s a problem that’s inherent with the journal system and not limited to a single area of science. I really don’t know how to fix it, but it is definitely a major concern.

Yes, and at Rejuvenation Research, we will try to fix it. In our journal, we will not have that block against publishing experiments that set out to prove that A was true, found that it’s actually B, and then explained why A wasn’t correct. We welcome such submissions. I think it should not be a separate journal; it should be a legitimate part of any scientific publication.

Yes, definitely, and just in what we’ve covered today, it shows you how many sorts of hurdles and pitfalls there are. It really plays into personal biases, which is something we all have.

As journalists, we often encounter biases, and part of being a good journalist is acknowledging that no matter what we do, we all have personal bias of some kind. In accepting and acknowledging that, we can do our best to keep it in check and report fairly with this in mind.

I don’t think you can ever really completely remove biases, because relationships are complicated. Friendships, rivalries, and pet hypotheses can all color our judgment. I’ve seen it in our field, and it definitely has an impact. It could even mean that a researcher on a journal editorial board who doesn’t like another researcher could try to strike down that person’s paper.

You are absolutely right, it is biased, but I don’t think it is hatred. I think it’s just a subjective thing that we all want to be correct.

It kind of tags onto the financial part, but very indirectly, and you are right. What I think is important is that we cannot get rid of the biases, but we can dilute them. If we are making decisions about the impact of a paper, then it should be many people in a field who make this decision, not just three or four individuals. People always discuss scientific papers after they’re published but are, at times, timid about providing a negative opinion.

Some think that the main problem is that peer review is single-blinded. If you express negative opinions about someone’s science, those same people could be reviewing your papers and grant proposals.

For our readers who might not be familiar with how journal review works, single-blind peer review is the traditional method of reviewing papers. This means that the reviewers know the identity of the authors, but authors don’t know the identity of reviewers. This can lead to problems of bias, both intentional and unintentional.

If it could be done anonymously and could involve the opinions of hundreds of people, this would help to dilute personal biases, which would then help us make more accurate decisions about the impact of individual papers.

I think that will make a big difference because it is less subjective. I potentially see it happening in the future, but it will be super difficult to do because it goes against what scientists have been doing traditionally.

It touches on your point that you cannot get rid of the biases, positive or negative. My solution would be to dilute them by increasing the number of judges, so to speak. In Olympic figure skating, in the past, there was a very small panel of judges with 0-6 scoring, and there were some scandals. So, they increased the number of judges and also changed the scoring system to a larger scale.
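
To illustrate the figure-skating analogy, here is a minimal sketch in Python of how a larger panel with trimmed scoring dilutes one biased judge; the scores are invented for illustration, and the 0-10 scale and trim policy are assumptions, not an established standard.

```python
import statistics

def trimmed_mean(scores, trim=1):
    """Drop the `trim` highest and lowest scores, then average the rest."""
    ordered = sorted(scores)
    return statistics.mean(ordered[trim:len(ordered) - trim])

# A small panel: one hostile outlier (the 2) drags the average down.
small_panel = [8, 9, 2]                # hypothetical 0-10 scores
print(statistics.mean(small_panel))    # ~6.3

# A larger panel with trimming: the same outlier is simply dropped.
large_panel = [8, 9, 2, 8, 7, 9, 8]
print(trimmed_mean(large_panel))       # 8.0
```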

Yes, which means that the larger the cohort, the less strongly outliers will influence the data and skew the figures. That touches upon decentralized science, or DeSci, which is something that Lifespan.io is very involved in.

Currently, in traditional grant-giving systems, promising moonshot projects that carry a potentially high risk of failure are turned away from funding in favor of less ambitious projects that have a higher chance of a positive publication.

That ties into risk aversion in journal publication. So, DeSci actually uses a large collective group of people to evaluate and determine which projects get funding. This creates these granting opportunities that are disconnected from traditional systems with their risk-aversion problems and helps ambitious projects avoid languishing without funds.

Because they are directly controlled by the community, the kinds of experiments that people would like to see are actually getting funded through this system of decentralized science. In other words, DeSci is helping to democratize science and break it out of the rut it is currently in.

DeSci could also potentially play a role in how journals operate in the future. Perhaps there could even be a DeSci journal.

Absolutely. You are totally right that high-innovation, paradigm-shifting science is often overlooked, and funding agencies realize that as well. As for the evaluation of scientific papers, I don’t think it should be a journal’s obligation. It should, I think, be an initiative where journals publish, after several years, the impact factor of individual research papers.

Or, perhaps there could be a community of people like Lifespan.io that can publish the impact of individual papers. Maybe something to think about as a collaboration. We need to understand and evaluate what was published, not focusing mostly on where it was published.

For that, you have a perfect platform already; I don’t think it would be really costly or require anything else except for the tabulation of professional opinion. So, all of us are the judges, and once a paper is published, it’s fair game to critically evaluate it, both positively and negatively. That information should help establish the impact of the work outside of its publication venue.

It almost sounds like film reviewing on the website Rotten Tomatoes, where there is an audience and critic rating.

Yes, I always look at Rotten Tomatoes to decide whether to watch movies or not, so absolutely. And it’s more or less accurate, right?

Yeah, more or less. If you had that collective reviewing and confirmation, you could have a core of professionals reviewing it, but then other verified professionals who are not on the review board could also collaborate by rating the paper, almost like how Rotten Tomatoes works.

That would be your canary in the coal mine as well. Because if the review panel gives it like 30%, but the audience who are actually consuming the paper gives it 90%, then you know something might not be right.

Yes that happens on Rotten Tomatoes all the time, both ways. Sometimes movie critics give it like 90%, and the audience has this overturned bucket of popcorn icon giving it a 30% score. In such a metaphor, thousands of scientists can determine that a paper is strong or weak, adding to the opinion of a couple of peer-reviewers who like it.

That will allow us, instead of reading everything that is done, redone, and published on PubMed, to use the judgment of the collective scientific community. As a collection of human beings, we currently surpass artificial intelligence, in my view.

The DeSci movement uses active community participation to fund and operate large-scale projects but also potentially could apply to reviewing things. I could easily see this being applied to journal clubs to facilitate the evaluation and sharing of information.

You could even use AI as a third layer to evaluate papers and review them. So, you would have your professional reviewers, the professional public, and AI. Doing that would potentially help to truly assess the impact of a paper and steer away from where it was published being the most important factor. Of course, the tools to do this already exist today if such a system was something we wanted to develop.

Yes, absolutely, and AI could be instrumental there because it could synthesize the diverse opinions and languages into a scoring system. This could really help to shift the focus from the venue where the content is published to what is being published.
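
As a sketch of what such a layered system might look like, here is a hypothetical blend of three assessment layers: a reviewer panel, a verified community, and an AI model. The 0-100 scale, the weights, and the divergence threshold are all invented for illustration, not a proposal from the interviewees.

```python
def composite_score(panel_scores, community_scores, ai_score,
                    weights=(0.4, 0.4, 0.2)):
    """Blend three assessment layers into one 0-100 score."""
    panel = sum(panel_scores) / len(panel_scores)
    community = sum(community_scores) / len(community_scores)
    w_panel, w_community, w_ai = weights
    score = w_panel * panel + w_community * community + w_ai * ai_score
    # A large panel/community gap is itself a signal, like a Rotten
    # Tomatoes critic/audience split, and could be flagged for scrutiny.
    diverges = abs(panel - community) > 20
    return score, diverges

# Hypothetical paper: the panel likes it, the community is lukewarm.
score, diverges = composite_score([85, 90, 80], [60, 55, 70, 65], ai_score=72)
print(round(score, 1), diverges)  # 73.4 True
```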

Yes, something like that would be really useful.

The final point about journal impact factor I want to touch upon is that a small number of highly popular papers, which get cited a lot, can also give a false impression of how popular a journal is. A journal’s impact factor is based on the average number of citations that its papers have attracted in a two-year period, which we talked about earlier.

This means that these highly popular papers are basically outliers that can skew the calculation of that average. What can we do about these disproportionately popular papers?
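
To make the skew concrete, here is a toy example with invented citation counts, where a single viral paper inflates the mean far above what the typical paper in the journal receives:

```python
import statistics

# Hypothetical two-year citation counts for ten papers in one journal.
citations = [3, 1, 4, 2, 0, 5, 2, 3, 1, 350]   # one viral outlier

print(statistics.mean(citations))    # 37.1 -- the impact-factor-style average
print(statistics.median(citations))  # 2.5  -- what a typical paper actually gets
```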

What can we do about that? I don’t think there is much. I think it again goes back to this foundation of basing the quality of the paper and the quality of the scientist on where it was published. At times, there are controversies, and papers come out later on to say that previous work is not generalizable or not reproducible in other systems or with other methods of analysis.

So, everything that you said is accurate, but I don’t think it can be solved by treating symptom after symptom. We all say that the system is not optimal, but we keep using it.

Yes, it’s almost like being in an abusive relationship. You know it’s bad for you, you know you shouldn’t be involved with it, and yet you keep coming back for more.

I think it is a great comparison. Journal impact factor is like an abusive relationship that academia has with itself. Why are we doing this to ourselves? Is this because there is no alternative? I think the remedy is to find an alternative using metrics, which people agree are fair, for ranking the impact factor of individual publications.

Then, if a journal publishes 10 high-impact papers that are ranked highly by the critics of the world in a year, then yes, it’s a good journal and others will try to submit to it. In summary, I don’t think we should pay supreme attention to the venue where a paper is published.

Yes, it’s a problem for sure, but hopefully, with all these new tools and the DeSci movement, we can replace what is broken. There’s no point in trying to fix a system that does not work, as you suggest; better to sink our energy into making something better to replace it.

On that final note, I would like to thank you for taking the time to talk with us today and for sharing your thoughts and insights on the journal system.

Lastly and importantly, to make sure that there is no misconception: Cell, Science, Nature, and others are excellent journals that have published and continue to publish groundbreaking research that is justly considered to be at the top of its scientific field. The goal is to diversify and optimize the stream of scientific information in the arena of aging and rejuvenation, to make it easier to publish negative data and paradigm shifts, and to not equate the value of a paper with the impact of a journal.

About the author

Steve Hill

Steve serves on the LEAF Board of Directors and is the Editor in Chief, coordinating the daily news articles and social media content of the organization. He is an active journalist in the aging research and biotechnology field and has to date written over 600 articles on the topic, interviewed over 100 of the leading researchers in the field, hosted livestream events focused on aging, as well as attending various medical industry conferences. His work has been featured in H+ magazine, Psychology Today, Singularity Weblog, Standpoint Magazine, Swiss Monthly, Keep me Prime, and New Economy Magazine. Steve is one of three recipients of the 2020 H+ Innovator Award and shares this honour with Mirko Ranieri – Google AR and Dinorah Delfin – Immortalists Magazine. The H+ Innovator Award looks into our community and acknowledges ideas and projects that encourage social change, achieve scientific accomplishments, technological advances, philosophical and intellectual visions, author unique narratives, build fascinating artistic ventures, and develop products that bridge gaps and help us to achieve transhumanist goals. Steve has a background in project management and administration which has helped him to build a united team for effective fundraising and content creation, while his additional knowledge of biology and statistical data analysis allows him to carefully assess and coordinate the scientific groups involved in the project.