perl - optimal way to process many similarly named text files


I have several thousand text files in a directory that need to be processed. They are named with slight variations:

/home/dir/abc123.name.efg-joe_p000.20110124.csv
/home/dir/abc456.name.efg-jon_p000.20110124.csv
/home/dir/abc789.name.efg-bob_p000.20110124.csv

I have a Perl script that can process one file at a time without any problem:

./script.pl /home/dir/abc123.name.efg-joe_p000.20110124.csv 

What's the best way to pass in and process many of these files, one at a time? Should I be looking at @ARGV for this? Or should I list the files in a separate file and use that as input?

If by "optimal" you mean "no code changes," and you are, as the pathnames suggest, on a *nix-like system, try this:

$ find /home/dir -type f -name \*.csv -exec ./script.pl {} \; 

If script.pl can handle multiple filename arguments, you can pass them in batches of, say, 10 at a time:

$ find /home/dir -type f -name \*.csv | xargs -n 10 ./script.pl 
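To see how the batching works, here is a self-contained demonstration with `echo` standing in for script.pl and placeholder filenames; `xargs -n 2` invokes the command with at most two arguments per call:

```shell
# printf emits one filename per line; xargs -n 2 groups them into
# batches of two and runs `echo` once per batch.
printf '%s\n' a.csv b.csv c.csv | xargs -n 2 echo
# prints:
#   a.csv b.csv
#   c.csv
```

Note that `-n` on its own only batches the arguments; the invocations still run sequentially. For true parallelism, xargs also supports `-P` (e.g. `xargs -n 10 -P 4 ./script.pl` to keep up to 4 invocations running concurrently), provided the script's outputs don't clobber each other.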
