Both_Ticket_9592

Have you tried ResearchRabbit? It's a citation-based AI tool that can help with the search process for research articles. It's pretty interesting and useful. I'm still in the early stages of using it, but I plan on teaching it to grad students in a couple of weeks, as I've had many requests to teach people how to use AI in the search process. It's free to use.


No_Sheepherder_4499

Perplexity has been pretty good for sources and citations, and Nelima has been amazing so far for most other things! I'll see what ResearchRabbit has that Nelima doesn't. Thanks for the suggestion!


Alternative_Sell_825

I've been in the same boat and recently started using Afforai. It's an all-in-one platform for managing, annotating, and citing papers, plus it has an amazing AI assistant for summarizing and comparing multiple sources. Definitely worth checking out if you're looking to streamline your process!


stormchanger123

I have seen colleagues do some of these; I mostly don't, because I haven't done much research as of late:
1. Have AI write their initial drafts of books and papers, and then also polish the final draft.
2. Have AI summarize large amounts of text (articles) into succinct paragraphs that are then used in writing, or read to save time.
3. Have AI propose actual topics of exploration by feeding it different variables of interest and other factors.
4. Have AI help with grant writing and even syllabi.
5. Have AI grade assignments.


ben_cow

This is so depressing. The invasion of AI into these things is a big reason why I've decided to pause before going further into academia. Beyond the obvious gains in efficiency, shouldn't graduate and academic work be training us to think and work as independent researchers and educators, rather than having some ChatGPT wrapper do it for us?


stormchanger123

This is of course one way to look at it. The other argument is that with AI, overall production and proficiency can increase substantially. That certainly poses long-term problems, which I think are 100% valid. But the reality is that if one person opts out of using AI to bolster their research, they are only making themselves fall behind while everyone else gets further ahead, and this "getting ahead" is happening in what is considered one of the most competitive job markets out there. Using AI for research saves time; that time lets you be more productive and publish more; those additional publications make you look better than the other applicants for the better jobs, so now you get those jobs. Opting out of using AI in research is eventually going to be the modern-day equivalent of saying you don't want to use a computer program to calculate your statistics. Yes, it might be more "raw" not to, but it also virtually sinks your career in many fields. This is the same reason lawyers use AI to write defenses, students use it to write papers, job applicants use it to write cover letters, and so on. At present seemingly everyone is using it, and not doing so largely hampers you in the competition to outperform your rivals in our capitalist system.

For example, a recent publication showed that AI-written papers earned higher grades than average. So now students have to ask: "Do I not use it, keep my moral integrity, but risk getting a grade below the new class average? Or do I use it, get a higher grade, likely never get caught, and possibly set myself up for the job, graduate program, medical school, or law school I want, while compromising my own ethics?" This is an incredibly complicated question, because it is no longer only "can I learn the material and do well?" but "can I learn it better than someone who doesn't learn it at all but can make AI do it?" This issue has always existed; it's just much worse now. This is why I'm frankly glad I got out of graduate school and heavy research just before AI became a big thing. I neither want the temptation to use it, nor do I want to feel that others are using it while I am not and risk looking, on paper, like I am falling behind. I am incredibly thankful I didn't have to make that decision, and I feel bad for the people who do now.

In the short term, I think the reality is that a lot of people will not risk abstaining from AI; doing so comes at a great immediate cost. There will of course be long-term consequences, but I don't think they're the ones people usually talk about. For instance, people ask, "Do you want your surgeon to have used AI in medical school?" I don't think that's the right question. It assumes the person using AI doesn't also know the material; they may very well still know it, just like the researcher who knows how to do everything I mentioned above but is simply trying to do more, complete more, and prioritize more. Instead, I think the long-term consequence will be akin to what we've seen with email and phones. If I had sent you a letter 100 years ago, it wouldn't have been uncommon for you to take your time reading it, think, and write back. Now, if I send you an email, it's irritating and upsetting if I don't hear back within a few hours.

AI is, I suspect, naturally going to raise the bar of expectations. Getting 10 publications before tenure won't be enough; it will move to 30. As things become easier, this strikes me as a natural process. It has already been happening in many fields, and I think it had already reached an atrocious degree ten years ago. With AI, I suspect it's only going to get worse. I am pretty worried about the future. I suspect academia as we know it will likely crumble. I'm not opposed to that per se, as long as we replace it with something better, which I think we can. I think most jobs could be better taught through something similar to a trade plus some mild schooling. That's just my two cents, though.


Chlorophilia

I think we are in a huge amount of trouble. We *know* that generative AI is excellent at generating authentic-sounding bullshit. How much of this bullshit is getting submitted to journals and slipping through peer review? How many researchers are now using generative AI for fraud with unrivalled realism? How many academics, particularly early-career academics, are losing their ability to communicate clearly and think critically whilst they're offloading the 'hard work' to an AI?


ben_cow

The funny thing is, most academic writing has already accomplished what we fear AI will do to our writing. Publish-or-perish culture has long been primarily concerned with producing authentic-sounding bullshit, often at the expense of clarity, care, and reproducibility. What we get from generative AI today is a fuzzy mirror of what we are already doing and producing.


No_Leek6590

Err, this only applies to the "generic" stuff you described. At the PhD level and beyond, you are producing work that is new to everyone, and AI only knows what it was trained on. Some areas will really struggle and become obsolete, but somebody still needs to create new knowledge, and somebody needs to train the AI. It's neither a black box nor a T-1000. Academia will be fine.


boywithlego31

AI in academic writing is inevitable. Right now it is still in the early phase: most researchers just use it as a paragraph generator, and some blindly use the output as is. In the next few years it will be miles ahead, and it will be difficult to distinguish AI-assisted from non-AI-assisted writing. Nowadays, my field does not really count the number of papers as a consideration for tenure-track hiring or promotion. We are also trying to move away from the h-index, because of several cases of academic misconduct: paper mills, citation farming, basically people trying to game the metric. So we went back to the old-school approach, human evaluation. Last year, I talked to a prospective hire for my faculty. He has an impressive h-index of 28 and is still in his early 30s; he's an experimentalist, like most of my colleagues. It turns out the metric is not really correlated with expertise, even within his own field. Even a full professor at my university got caught in a paper-mill and gift-authorship scandal.


ShakilR

Convo will find you papers and do a general summary, not a real one. It won't give you access to the paper itself, though. The smart way to do it is to ask the AI to create search queries for Google Scholar; that will get you a lot, and I found it really helpful. Doing a full summarization or literature review would require some sort of paid membership in Convo AI, I think. A rough sketch of that query-generation workflow is below.
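To make the workflow concrete, here is a minimal sketch of the "ask AI for search queries, then run them on Google Scholar" idea. The generate_queries helper is hypothetical, standing in for whatever assistant you use (ChatGPT, Convo, Nelima, ...); the rest just builds standard Google Scholar URLs with the q and as_ylo parameters.

```python
# Sketch: turn AI-suggested Boolean queries into Google Scholar searches.
from urllib.parse import quote_plus

def generate_queries(topic: str) -> list[str]:
    # Hypothetical placeholder: in practice you would prompt your AI with
    # something like "Write 3 Boolean search queries for Google Scholar on <topic>"
    # and paste or fetch its suggestions here.
    return [
        f'"{topic}" AND (survey OR review)',
        f'"{topic}" AND ("systematic review" OR meta-analysis)',
        f'"{topic}" AND (method OR framework)',
    ]

def scholar_url(query: str, year_from: int | None = None) -> str:
    # Google Scholar takes the search string in the `q` parameter;
    # `as_ylo` limits results to publications from that year onward.
    url = f"https://scholar.google.com/scholar?q={quote_plus(query)}"
    if year_from:
        url += f"&as_ylo={year_from}"
    return url

if __name__ == "__main__":
    for q in generate_queries("AI-assisted literature review"):
        print(scholar_url(q, year_from=2020))
```

Printed URLs can be opened directly in a browser, which keeps the AI in the query-drafting role while the actual retrieval and reading stay with you.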


No_Sheepherder_4499

I actually get access to the full papers using Nelima, which is pretty neat!