
Expanding the human proteome with microproteins and peptides
Approximately 25% of 7,200 noncoding open reading frames (roughly 1,800) produce detectable peptides in cells; the functions of these peptides remain unknown.

Recently I have run into LiP-MS, and there are quite a few high-impact papers describing the technique. But there are really few papers that have actually used it to do science.
I think it is the same for several other techniques, like proximity-labelling MS, certain advanced variants of thermal proteome profiling, etc.
Why do you think this is? If these techniques are so great, why aren't they being used for actual science on a much greater scale? Or is my assumption wrong?
Excited to hear your views.
For perspective, I am a cancer biologist trying to do omics.
Hello everyone,
I have been using Pierce Desalting Spin Columns for peptide cleanup, which require 300 uL of 50% ACN for elution.
Now I have to use the Pierce C18 spin columns. Despite both being small spin columns, the C18 protocol says that only 20 uL of 70% ACN should be used for elution.
My question is: isn't 20 uL a very small volume? Why is there such a huge difference in elution volume between the two spin columns (desalting vs. C18)? Lastly, is there any general practice in the proteomics community of using a larger elution volume with the Pierce C18 spin columns?
The protocol also says one may use 70% ACN with 0.1% TFA, but I have no idea why that is not the standard elution solvent for this setup.
I understand that most hardcore proteomics labs are probably not using Pierce C18 spin columns, but this is what I have as a part-time proteomics guy. I don't have any other C18 setup, and I will send the cleaned-up samples to a mass spec facility.
Any advice on the Pierce C18 spin columns is greatly appreciated.
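For context, here is my back-of-the-envelope on why the small volume might be intentional. The binding capacity below is what I recall from the Pierce insert (around 30 ug of peptide for the C18 spin columns), so treat it as an assumption and check your own documentation:

```python
# Back-of-envelope: peptide concentration after elution from each column type.
# The 30 ug binding capacity is an assumption from memory of the Pierce
# insert, not a verified spec; check the documentation for your lot.

peptide_load_ug = 30.0     # assumed peptide load on either column
c18_elution_ul = 20.0      # per the Pierce C18 spin column protocol
desalt_elution_ul = 300.0  # per the desalting spin column protocol

print(f"C18 spin column:  {peptide_load_ug / c18_elution_ul:.2f} ug/uL")
print(f"Desalting column: {peptide_load_ug / desalt_elution_ul:.2f} ug/uL")
# -> roughly 1.5 ug/uL vs 0.1 ug/uL: the tiny C18 resin bed lets you elute
#    in a small, concentrated volume, which helps if the mass spec facility
#    wants a concentrated sample after drying and resuspension.
```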
I have been working in proteomics for about 5 years.
Recently, I had a discussion with my supervisor about why proteomics journals usually have relatively low impact factors.
To be honest, I avoided submitting to these journals for years because of that. We usually preferred broader biomedical journals.
But recently I changed my mind a bit. I submitted two papers to Proteomics and one to Journal of Proteome Research.
Now I wonder if impact factor is a bit misleading in this field. Proteomics is very important, but many papers are technical, dataset-based, or useful mainly to a specialized audience.
The same proteomics study may get more attention if it is published as a cancer, immunology, metabolism, or microbiology paper instead of as a proteomics paper.
So I am curious:
Do you think proteomics journals are undervalued?
Do you avoid specialized journals because of impact factor?
For people working in proteomics or other omics fields, how do you choose where to submit?
I am planning to go for the big panels, SomaScan 11K or Olink Explore HT (~5.3k), for my CSF samples. SomaScan's menu is more accessible and gives detailed QC metrics for all of their proteins, and it has a good hit rate for my proteins of interest. Olink is a bit more opaque about the performance of specific proteins. I am also leaning towards SomaScan because it is larger, is offered as a service, and is charged per sample (instead of per plate).
Apart from the logistical advantages, why should anyone choose one over the other? Any specific downsides of SomaScan? Can the aptamers be reliably trusted to be specific? Does anyone have good/bad experience with CSF proteomics using these two methods?
I need to make a proper decision, as it is quite a lot of money!
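To illustrate why the per-sample vs. per-plate billing matters for a cohort like mine, here is a sketch with entirely made-up numbers (none of these prices or plate sizes come from an actual SomaLogic or Olink quote):

```python
import math

# Illustration of per-sample vs per-plate pricing. Every number here is a
# hypothetical placeholder, not a vendor quote.
n_samples = 50
price_per_sample = 700.0    # hypothetical per-sample service price
plate_size = 88             # hypothetical samples per plate
price_per_plate = 55_000.0  # hypothetical per-plate price

per_sample_total = n_samples * price_per_sample
per_plate_total = math.ceil(n_samples / plate_size) * price_per_plate

print(f"Per-sample pricing: ${per_sample_total:,.0f}")
print(f"Per-plate pricing:  ${per_plate_total:,.0f} (full plate billed)")
# With a cohort that doesn't fill a plate, per-sample pricing means you
# aren't paying for empty wells.
```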
I have been reading a lot of recent proteomics papers, especially LC-MS/MS quantitative proteomics studies, and I keep getting the impression that most downstream bioinformatics workflows are basically the same.
A typical pipeline seems to be something like:
preprocessing/filtering of protein or peptide abundance matrices
normalization, often VSN, median normalization, quantile normalization, etc.
missing value imputation, usually MinProb, random forest, KNN, QRILC, or some variant
differential abundance analysis with limma, MSstats, DEP, proDA, DEqMS, or now newer tools like limpa
volcano plots
heatmaps/PCA
clustering, sometimes Mfuzz
co-expression/module analysis, sometimes WGCNA
ORA/GSEA/pathway enrichment
STRING/Cytoscape protein-protein interaction networks
And then the biological interpretation is usually based on enriched pathways, hub proteins, or interaction networks.
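For concreteness, here is a minimal sketch of the middle of that pipeline (normalization, imputation, differential testing) in Python. The matrix and group labels are made up, and a real analysis would reach for limma/MSstats/DEP rather than a bare t-test; this just shows the shape of the computation:

```python
# Sketch of the "standard" downstream pipeline on a log2 intensity matrix
# (proteins x samples). Data and column names are hypothetical.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
mat = pd.DataFrame(
    rng.normal(20, 2, size=(500, 6)),
    columns=["ctrl_1", "ctrl_2", "ctrl_3", "trt_1", "trt_2", "trt_3"],
)
mat[mat < 18] = np.nan  # fake left-censored missingness

# 1) median normalization: align sample medians to a common value
mat = mat - mat.median(axis=0) + mat.median(axis=0).mean()

# 2) MinProb-style imputation: draw from a down-shifted, narrowed distribution
for col in mat.columns:
    missing = mat[col].isna()
    shift = mat[col].mean() - 1.8 * mat[col].std()
    mat.loc[missing, col] = rng.normal(shift, 0.3 * mat[col].std(), missing.sum())

# 3) differential abundance: Welch t-test per protein, BH-adjusted
ctrl, trt = mat.iloc[:, :3], mat.iloc[:, 3:]
t, p = stats.ttest_ind(trt, ctrl, axis=1, equal_var=False)
log2fc = trt.mean(axis=1) - ctrl.mean(axis=1)

order = np.argsort(p)
ranked = p[order] * len(p) / np.arange(1, len(p) + 1)
bh = np.empty_like(p)
bh[order] = np.minimum(1, np.minimum.accumulate(ranked[::-1])[::-1])

res = pd.DataFrame({"log2FC": log2fc, "pvalue": p, "padj": bh})
print(res.sort_values("padj").head())  # the table behind the volcano plot
```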
My question is: is there anything genuinely new or methodologically interesting happening in proteomics bioinformatics, especially downstream of protein quantification?
Does anyone use PEAKS software from Bioinformatics Solutions for proteomics data analysis? I am new to it and want to understand how you analyze the data and which parameters you usually change, and why.
Hi. I want to analyse some publicly available proteomics datasets to validate some of our own proteomics results, and I thought of making use of the PRIDE database. I am slightly confused about how to go about it. If anyone has done this before (starting from downloading the dataset, processing it if needed, and analysing it in RStudio, or anything like that), could you please help out?
Thank you so much, it really means a lot!
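For the download step, a minimal sketch against the PRIDE Archive REST API might help (Python requests; the v2 endpoint path and field names are my assumptions from the version of the API I have seen, so verify them against the current PRIDE documentation):

```python
# Sketch: list the files attached to a PRIDE project via the Archive REST API.
# Endpoint path and JSON field names follow the v2 API as I recall it;
# double-check against the current PRIDE docs before relying on them.
import requests

BASE = "https://www.ebi.ac.uk/pride/ws/archive/v2"
accession = "PXD000001"  # example public accession; swap in the dataset you need

resp = requests.get(f"{BASE}/projects/{accession}/files", timeout=30)
resp.raise_for_status()

for f in resp.json():
    # If these field names have changed, inspect resp.json() directly to
    # find the file names and public download locations (FTP/Aspera).
    name = f.get("fileName")
    category = (f.get("fileCategory") or {}).get("value")
    print(category, name)
```

From there, the processed result files (rather than the raw files) are usually the easiest starting point for a reanalysis in RStudio.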
Is anyone experiencing issues with TMT11 labelling efficiency? I'm from a lab that has done TMT proteomics for many years, and >99% labelling efficiency is standard for us. Over the past few weeks we've seen a drop to around 80% efficiency. We've changed everything we can think of on our end and are starting to think it might be a manufacturing problem.
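For anyone who wants to reproduce the check: the usual approach is to search with TMT set as a variable modification on lysines and peptide N-termini, then count the fraction of expected sites that actually carry the tag. A rough sketch, with hypothetical column names that you would adapt to your search engine's PSM export:

```python
# Labelling-efficiency check: search with TMT as a *variable* mod on K and
# the peptide N-terminus, export PSMs, then count how many expected sites
# carry the tag. Column names ("peptide", "modifications") and the file
# path are hypothetical; adapt to your PSM export format.
import pandas as pd

psms = pd.read_csv("psms.tsv", sep="\t")  # hypothetical PSM export

def expected_sites(peptide: str) -> int:
    """TMT-reactive sites: one N-terminus plus every lysine."""
    return 1 + peptide.count("K")

def labelled_sites(mods: str) -> int:
    """Count TMT modifications listed in the (hypothetical) mod string."""
    return str(mods).count("TMT")

expected = psms["peptide"].map(expected_sites).sum()
labelled = psms["modifications"].map(labelled_sites).sum()
print(f"Labelling efficiency: {100 * labelled / expected:.1f}%")
```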