All screenplays on the simplyscripts.com and simplyscripts.net domains are copyrighted to their respective authors. All rights reserved. This screenplay may not be used or reproduced for any purpose, including educational purposes, without the express written permission of the author.
Great news on job, Mark! That's become a rare thing these days. So glad you were able to find something. I hope it's a job you like!! Lost mine in March and can't find anything that paid close to what I was making. Crazy times.
Indeed. It's about a 50% pay cut I've had to accept. Better than nothing and maybe in a year or two I can crawl myself back to what I was once earning.
Ouch, but well done for taking the job and accepting the situation.
Now you just have to work and write like the rest of us.
The Elevator Most Belonging To Alice - Semi Final Bluecat, Runner Up Nashville
Inner Journey - Page Awards Finalist, Bluecat Semi Final
Grieving Spell - Winner - London Film Awards, Third - Honolulu
Ultimate Weapon - Fresh Voices - Second Place
IMDb link... http://www.imdb.com/name/nm7062725/?ref_=tt_ov_wr
I'm struggling with the grading this round... there are scripts that have clearly made no attempt to address the theme, and where the genre is also suspect.
So with a limited 1-5 system, which is what we have here, a 2.6 or a 3.6 is rounded up to a 3 or a 4. But then a 2.4 and a 3.4 come out as a 2 and a 3....
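A quick sketch of that rounding effect, with made-up scores and standard half-up rounding assumed:

```python
# Two scripts 0.8 apart in average quality can end up with the same grade
# once averages are rounded to the nearest whole number (half-up rounding,
# not Python's default banker's rounding).
from decimal import Decimal, ROUND_HALF_UP

def round_half_up(score):
    """Round a score to the nearest integer, with halves rounding up."""
    return int(Decimal(str(score)).quantize(Decimal("1"), rounding=ROUND_HALF_UP))

print(round_half_up(2.6))  # 3
print(round_half_up(3.4))  # 3 -- same grade despite a 0.8 gap
print(round_half_up(2.4))  # 2
```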
AJR
yup, it's a tricky one.
The scoring system is not a bad one, but as you say, sometimes hard to separate out. Especially as you could have a good script but a weak connection to the theme/parameters; if we don't adjust for that, what's the point in the first place?
I haven't given a 1 because the scripts are all fair efforts.
I have rarely given a 5 as I reserve this for a 'blow me away' type script
so that leaves 2-4 to differentiate. But averaged out across us all it seems to work out
I did wonder whether we should be given our average review figures to see who's mean, and who's just plain lovely
Would that change some scores? Don't know.
If I remember my statistics education, it shouldn't matter if someone votes 1-4 or 2-5 (either thinking no one is excellent or no one is crap). Where scores can muddy the system is if everyone is average. I've rarely scored an entry a 5, but I have dished out a 1 a couple times through the years (sometimes for my own, if I could). So mean/lovely is relative to the curve that they vote on. If they voted 15 entries as poor and 2 as very good, that can screw the scoring as well. I usually think of just how I would order them from first to last based on how they fit the parameters and how well crafted they are. A well crafted entry that is light on theme is better than an entry that ticks all the boxes but is riddled with mistakes.
But then again, maybe I'm just blowing smoke.
There are three ways this gets handled in serious surveys.
First, yuge sample size. The law of large numbers ensures the effect you mentioned above averages out.
Second, inter-rater reliability. At the opposite end of the scale, you can check that different raters' scores of overlapping targets are sufficiently correlated that the ratings mean something. This is usually combined with attention checks or manipulation checks (where the surveyor already knows the answer), to make sure people are taking the survey seriously.
Third, rater fixed effects. That is, you take a particular rater's average score and subtract it from each of his/her scores. All of your scores will be centered on zero, but comparable to one another. You can go all-out with z-scores (subtract the average then divide by the standard deviation) to normalize if one rater has higher variability than another.
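A minimal sketch of that centring idea, assuming two hypothetical raters who use different ends of the scale:

```python
# Rater fixed effects: centre each rater's scores on their own mean, then
# divide by their standard deviation (z-scores). A stingy 1-4 rater and a
# generous 2-5 rater become directly comparable after normalization.
import statistics

def z_scores(scores):
    """Subtract the rater's mean from each score, then divide by their
    standard deviation, so scores are centred on zero."""
    mean = statistics.mean(scores)
    sd = statistics.stdev(scores)
    return [(s - mean) / sd for s in scores]

stingy = [1, 2, 3, 4]    # thinks no one is excellent
generous = [2, 3, 4, 5]  # thinks no one is crap

# Both raters produce identical z-scores, so their rankings line up.
print(z_scores(stingy) == z_scores(generous))  # True
```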
It'd be a fun exercise to see the margin of error for the ratings here (standard deviation of ratings divided by the square root of the number of ratings, times a scaling factor*), but all it would really show is that the top few entries are really close to one another. We already know that.
* Scaling factor comes from looking up "number-of-raters minus one" on the first column of this table and using the third column, which will be something in the neighborhood of 2.15.
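A sketch of that calculation with made-up ratings. The 2.145 scaling factor is the t-value for 14 degrees of freedom (15 raters minus one) at 95% confidence, hard-coded here since the standard library has no t-table:

```python
import math
import statistics

def margin_of_error(ratings, t_value=2.145):
    """Standard deviation of the ratings divided by sqrt(number of ratings),
    scaled by the t-value for (number of raters - 1) degrees of freedom."""
    return t_value * statistics.stdev(ratings) / math.sqrt(len(ratings))

# 15 hypothetical raters scoring one entry on the 1-5 scale.
ratings = [3, 4, 3, 2, 4, 3, 5, 3, 4, 2, 3, 4, 3, 3, 4]
print(round(margin_of_error(ratings), 2))
```

With ratings bunched around 3-4 like this, the margin comes out well under half a point, so two entries averaging 3.3 and 3.5 genuinely can't be separated.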
I mainly score based on obvious effort. To fit 5 variables into 5 pages and come up with a decent story is beyond challenging. Some weeks you have easier variables than others, and sometimes you have to work with variables that are just completely insane. If I see someone tried to fit all 5 in, even if they were light on a couple or even "kind of" shoehorned them... I give them credit for trying. If there is no attempt at the theme... they get a 1. Zero sign of an attempt at the assigned genre... 1. If they just tell us someone has a certain job but it's not shown in their actions... 1. The object... personally, I don't care much. As long as it's doing something and not just a prop, it's there. Location doesn't seem to be much of an issue. I think the majority of scripts so far have made good use of that.
IMO, people are going to vote on their personal take on this challenge. Some review and score according to their preference for favourite genres. Some even review and score based on their own made-up rules of what's acceptable and what's not. So based on all that, I'm not sure showing averages would enlighten us to anything.
Sean probably knows the history of us and how stingy or lovely each of us is.
I have no pattern, but usually there is at least one excellent from me. This time there were two.
Actually, this time there was a variety of grades, when other times (not often) there are many goods and just a couple of deviations.
There are so many factors to weigh in this comp, so I decided that genre and theme are more important than anything else. I don't see many that are off the theme, btw, or where there's no attempt to touch the theme. And the genre had better be somewhat close to the labelled one.
I score according to my view of how the script has tried to meet the Challenge, i.e. use the criteria, meet the theme in stated genre and fit them into a coherent story.
But I think maybe we might be taking it a little too seriously if we get into debates on statistics.
I wasn't really concerned about the statistics, I was just pointing out that if in our minds the script is a 3.4, then the script is still a 3, as is a 2.6 script. So the quality can vary widely, by 8/10 of a point, and the scripts will receive the same grade.