I'm extremely disorganized, and I suffer from a lot of information duplication. (I'm working on it.)
In the meantime, however, I need to get my LaTeX citations handled! So I'm using bibtool on an .aux file and a set of bib files to produce a bib file just for that paper, containing only the citations the paper actually requires. Once I get organized, I won't need to do this. Till then:
/home/pscherme/bin/bibtool -s -d -x all_papers.aux review.bib newbib2.bib system2.bib new.bib > all_papers.bib
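For anyone (i.e., future me) trying to reproduce this, the surrounding workflow looks roughly like the following; the file names are from my own setup, and the initial latex run matters because bibtool reads the .aux file to see which keys are actually cited:

latex all_papers.tex
/home/pscherme/bin/bibtool -s -d -x all_papers.aux review.bib newbib2.bib system2.bib new.bib > all_papers.bib
bibtex all_papers
latex all_papers.tex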
Thursday, November 17, 2011
Wednesday, October 26, 2011
Insert something from the kill ring after the cursor
Another accidental discovery: Ctrl-u, typed before the usual Ctrl-y, means the inserted text ends up after the cursor instead of before it, which is the default.
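In other words, as far as I understand the behavior:

C-y      insert the last kill; the cursor ends up after the inserted text
C-u C-y  insert the last kill; the cursor stays before it (and the mark goes at the end)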
Monday, August 22, 2011
Java pun
It is when I am coding that I am most likely to use the terms "argh" and "rargh". Today, while fretting over some command-line argument handling, I found that I had written the following in my notes:
"Arg, what is the usual way of doing this???"
It took me a minute to figure out why I appeared to be addressing the argument...
Monday, August 1, 2011
My Least Favorite Thing About stackoverflow
...when people comment on a question with, "I don't know LaTeX, but you shouldn't do what you want to do because of (insert totally subjective opinion here, such as 'tables look better without vertical lines anyway')".
It just makes me want to mod your comment down as "irrelevant"! And it happens all the time with the LaTeX questions!
And also, for the 2000th time: yes, I realize tex.stackexchange exists. Yes, I realize that the person who asked the question could have answered it themselves with two minutes on Google and a toy example to play with. But I learn a lot about things I never even thought about when LaTeX novices ask easily-googled questions on stackoverflow, so stop with the silly comments! If you can't answer the question, then don't answer it, okay!
Thursday, July 21, 2011
Emacs
The main reason I love emacs is that sometimes I will inadvertently hit a strange key combination and -- something unexpected will happen! Today I learned through this method that M-c will capitalize a word. That's a feature I don't need often, but whenever I do -- emacs will be there!
Update: I learned this while preparing to work on creating slides from a paper. A few minutes later, as I was collecting all statements of each main idea from my tex file and assimilating them into one single, complete statement for each main idea, I was about to manually capitalize a word but I remembered in time: Meta-c!
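For my own future reference, the related word-case commands (these should be the standard Emacs bindings, if I have them right):

M-c   capitalize-word
M-u   upcase-word
M-l   downcase-word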
Monday, July 18, 2011
Bayes vs. Markov
I was perhaps unjustifiably surprised, as I was going through the Naive Bayes classifier model, to find that it looks very similar to something I'm already quite familiar with. Basically, if you start from Bayes' Theorem and go one direction (conditional probability), then make independence assumptions, you end up with the model for the NB classifier. If you go a different direction (chain rule) and then make independence assumptions, you end up with a Markov model. I'm guessing a lot of other models are quite similar too...
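Here's a rough sketch of what I mean, writing w1, ..., wn for the observed tokens and c for the class (the notation is my own reconstruction, not anything from a particular textbook):

Bayes' Theorem: P(c | w1, ..., wn) = P(w1, ..., wn | c) P(c) / P(w1, ..., wn)
Assume the wi are independent given c: P(c | w1, ..., wn) ∝ P(c) P(w1 | c) P(w2 | c) ... P(wn | c)   [Naive Bayes]

Chain rule: P(w1, ..., wn) = P(w1) P(w2 | w1) P(w3 | w1, w2) ... P(wn | w1, ..., wn-1)
Assume each token depends only on the previous one: P(w1, ..., wn) ≈ P(w1) P(w2 | w1) P(w3 | w2) ... P(wn | wn-1)   [first-order Markov]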
Prior, posterior
I don't know a lot about Bayesian statistics, but I'd like to understand a few terms. I often hear "prior" and "posterior" thrown around, and here's my understanding of them after a look at Wikipedia:
It seems the prior (or prior probability) is the measure of uncertainty about an event without taking any evidence (specific features) into account.
Apparently the posterior (or again, posterior probability) is the conditional probability assigned after relevant evidence is taken into account.
So, now to construct an example that illustrates what I currently believe about these concepts: If, in a given corpus, 50% of the tokens are determiners, then the chance of selecting a token you know nothing about and finding it to be a determiner is 50%. I believe that's the prior. However, if 70% of tokens occurring after verbs are determiners, then the posterior probability is the conditional probability P(determiner|verb) ["probability of a determiner given a verb"], so 70%.
In Bayes' Theorem, which is the one part of Bayesian anything that I am, one might almost say, *too* familiar with, the prior is multiplied by the likelihood function and then normalized to obtain the posterior. So:

P(A|B) = P(B|A) P(A) / P(B)

or, equivalently:

posterior ∝ likelihood × prior
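To make that concrete with the determiner example above (the 49% and 35% figures here are made up just so the arithmetic comes out right):

prior:       P(determiner) = 0.50
likelihood:  P(after a verb | determiner) = 0.49   (made-up)
evidence:    P(after a verb) = 0.35                (made-up)

posterior:   P(determiner | after a verb) = 0.49 × 0.50 / 0.35 = 0.70

which matches the 70% from the example.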
However, one confusing segment of the Wikipedia entry for a prior is:
"of an uncertain quantity p (for example, suppose p is the proportion of voters who will vote for the politician named Smith in a future election) is the probability distribution that would express one's uncertainty about p before the "data" (for example, an opinion poll) is taken into account."
That seems to suggest that we can't take *any* data into account in order to find it. Don't we then just have to guess? Sounds like more reading may be in order...