Tuesday, March 29, 2011

Shrink Your Databases Regularly

Oh, no, no, no… Not that kind of shrinking! (But I did get your attention, didn’t I?)

shrink \shringk\ n. : (slang) [short for headshrinker] a clinical psychiatrist or psychologist [Tony Soprano sees his shrink, Dr. Melfi, every week.] vt. : (slang) to psychoanalyze [Shrink your databases regularly.] shrank, shrunk


A note to the reader: This is based on several true stories. Some events were changed or otherwise fictionalized for dramatic purposes.



Dr. Ben Adryl: Good afternoon. I’m Dr. Ben Adryl, but many people just refer to me by my initials DBA. And you are Mr. Tabase, I presume?

Deigh Tabase: Yes, my first name is Deighton. Most people just call me Deigh for short.

Dr. Ben Adryl (DBA): Welcome. Now how can I help you today?

Deigh Tabase (Database): Well, I seem to have a lot of problems, and they’ve been getting worse with each passing day. It was suggested that I make an appointment with you. But I have to admit, I’ve never been to a shrink before.

DBA: I would prefer that you not use the “s” word in this office. I’m not really a traditional psychotherapist anyway… I take more of a holistic approach and work with all aspects of a person’s well-being, like sanity, health, and productivity. Think of me as a life coach. I’m kind of like Dr. Phil and Dr. Oz and David Allen all rolled into one.

Database: That sounds great, doc. How do we start?

DBA: Tell me a little bit about some of your problems.

Database: Well, for one thing, I’ve been suffering from claustrophobia as long as I can remember. Plus I feel bloated and tired and overworked and I can’t seem to get organized. I guess I’m kind of a mess.

DBA: Don’t worry, all is not lost. I’m sure I can help. Let me explain how this works. I’m going to ask you several questions, using a special technique to draw more information out of your subconscious. This technique is called Dynamic Management View Querying.

Database: Will it hurt?

DBA: Oh no, not at all. Based on what I find, I may be able to give you recommendations on how you can find relief, and, in some cases, I may be able to administer treatment immediately.

Database: That would be wonderful, doc.

DBA: So tell me more about yourself… Your age, your background, etc…

select Created=convert(varchar(20),d.create_date,100)
,LastRead=isnull(convert(varchar(20),s.last_read_date,100),'')
,LastWrite=isnull(convert(varchar(20),s.last_write_date,100),'')
,CompLevel=d.compatibility_level
,RecovModel=d.recovery_model_desc
,LogReuseWait=d.log_reuse_wait_desc
,Collation=d.collation_name
,AutoStats=case
when d.is_auto_create_stats_on=1
then 'Create '
else ''
end
+case
when d.is_auto_update_stats_on=1
then 'Update '
else ''
end
+case
when d.is_auto_update_stats_async_on=1
then 'Asynch '
else ''
end
,Parameterization=case
when d.is_parameterization_forced=1
then 'Forced'
else 'Simple'
end
,UserAccess=d.user_access_desc
,[State]=d.state_desc
from sys.databases d
outer apply (select last_read_date
=max(case
when last_user_scan>=isnull(last_user_seek,'19000101')
and last_user_scan>=isnull(last_user_lookup,'19000101')
then last_user_scan
when last_user_seek>=isnull(last_user_scan,'19000101')
and last_user_seek>=isnull(last_user_lookup,'19000101')
then last_user_seek
else last_user_lookup
end)
,last_write_date=max(last_user_update)
from sys.dm_db_index_usage_stats
where database_id=d.database_id) s
where database_id=db_id('Deighton')
/*
(Reformatted for clarity):
Created............ Dec 18 2010 8:37PM
LastRead........... Mar 29 2011 11:24AM
LastWrite.......... Mar 29 2011 11:23AM
CompLevel.......... 100
RecovModel......... FULL
LogReuseWait....... LOG_BACKUP
Collation.......... SQL_Latin1_General_CP1_CI_AS
AutoStats.......... Create Update
Parameterization... Simple
UserAccess......... MULTI_USER
State.............. ONLINE
*/
DBA: Hmmm… Interesting.

Database: What is it, doc?

DBA: Oh nothing… Don’t worry… I often mumble to myself as I do some of this analysis. Pay it no mind. Tell me a little more about yourself …

use Deighton
go
select
LogicalName=name
,[Type]=case when [type]=0 then 'Data' else 'Log' end
,SizeAllocated=convert(varchar(20),convert(decimal(12,2),size*8./1024))+'MB'
,SpaceUsed=convert(varchar(20),convert(decimal(12,2),UsedPages*8./1024))+'MB'
,[%Full]=convert(varchar(20),convert(decimal(12,1),UsedPages*100./size))+'%'
,SpaceLeft=convert(varchar(20),convert(decimal(12,2),(size-UsedPages)*8./1024))+'MB'
,[%Avail]=convert(varchar(20),convert(decimal(12,1),(size-UsedPages)*100./size))+'%'
,Growth=case
when is_percent_growth=1
then convert(varchar(20),growth)+'%'
else convert(varchar(20),convert(decimal(12,2),growth*8./1024))+'MB'
end
,MaxSize=case
when max_size=-1
then 'Unlimited'
else convert(varchar(20),convert(decimal(12,0),max_size*8./1024))+'MB'
end
,PhysicalFile=physical_name
from sys.database_files
cross apply (select UsedPages=fileproperty(name,'SpaceUsed')) F_Used
where [type] in (0,1) --Rows,Log
/*
(Reformatted for clarity):
Deighton_Data (Data):
Size: 1756.00MB
Used: 1755.00MB (99.9%)
Avail: 1.00MB (0.1%)
Growth: 1.00MB
Max: Unlimited
File: C:\Microsoft SQL Server\MSSQL.1\MSSQL\data\Deighton_Data.MDF
Deighton_Log (Log):
Size: 14198.81MB
Used: 13745.87MB (96.8%)
Avail: 452.95MB (3.2%)
Growth: 10%
Max: Unlimited
File: C:\Microsoft SQL Server\MSSQL.1\MSSQL\data\Deighton_Log.LDF
*/
DBA: Ahhh… I see… You mentioned earlier that you suffer from claustrophobia.

Database: Yes, all the time. And often I have these anxiety or panic attacks where I kind of explode, and it makes me feel a little better, but not for long.

DBA: Well, I can cure you of that immediately.

Database: You’re kidding… Really?

DBA: Yes. Now just hold still while I…

/*
The database felt claustrophobic because it was using up 99.9% of its allocated space.
And its FILEGROWTH is only 1MB (the default), so every time it reached full capacity and
had an anxiety attack, which was often, it only experienced "autogrowth" of a piddly
little 1MB, which might have made it feel relatively better for a short while, but it
would still be only 1MB away from being full capacity again and having another attack.
If the database's initial allocated size upon creation was 200MB, then it had over 1500
autogrowth anxiety attacks in its lifetime!
So DBA bumped the allocated size up to 3500MB to relieve the claustrophobia and,
at the same time, he changed the FILEGROWTH to 100MB. This was just a preventative
measure, in case the database unexpectedly reached that 3500MB capacity. But DBA will
diligently monitor the used space and adjust sizes appropriately long before the
database uses up that space.
*/
alter database Deighton
modify file (name=N'Deighton_Data'
,size=3500MB
,filegrowth=100MB)
DBA: How does that feel?

Database: Omigosh, doc! That’s amazing! I feel so free now! Yippee!

DBA: I think I can take care of that bloating problem you mentioned earlier also. One moment…

/*
The database's transaction log was HUGE. It was about 14GB in size, filled up
to 96.8% capacity. That's compared to its actual data, which was only 1.7GB.
Since the database has a Recovery Model of FULL, its transaction log will
continue to grow larger and larger forever... unless a transaction log backup
is performed. That log backup will truncate the contents of the transaction
log. Note that a full backup of the database WILL NOT truncate the transaction
log automatically... Log backups must be performed in order to do that.
*/
backup log Deighton
to disk=N'C:\SomeBackupLocation\DeightonLog-yyyymmdd-hhmmss.trn'
DBA: How’s that?

Database: Much better, doc!

DBA: Perhaps at another time I can make a further adjustment, though it is a procedure that I very rarely perform.

Database: Okay.

DBA: Now tell me about diet and exercise. Do you exercise regularly?

Database: Well… I… er…

/*
Obtain the 5 most recent backups of each type performed.
*/
with BackupDetail as
(
select BackupType
,backup_finish_date
,RowNum=row_number() over (partition by BackupType
order by backup_finish_date desc)
from msdb.dbo.backupset
cross apply (select BackupType=case [type]
when 'D' then 'Full'
when 'I' then 'Differential'
when 'L' then 'Log'
when 'F' then 'File'
when 'G' then 'Differential File'
when 'P' then 'Partial'
when 'Q' then 'Differential Partial'
else 'N/A'
end) F_Type
where database_name='Deighton'
)
select BackupType
,backup_finish_date
from BackupDetail
where RowNum<=5
order by BackupType
,RowNum
/*
BackupType backup_finish_date
---------- -----------------------
Full 2011-03-20 23:01:33.000
Full 2011-03-04 22:19:52.000
Full 2011-02-24 22:43:39.000
Full 2011-02-19 22:24:03.000
Full 2011-02-11 23:11:42.000
Log 2011-03-29 11:35:31.000 <==This is the log backup just performed
*/
DBA: Hmmm… It seems pretty sporadic.

Database: Yeah, well…

DBA: Listen, Deighton, this is very important. I know you’re the kind of guy who wants to live life to the FULLest. In order to really do that, you have to do regular exercise and have a healthy diet. Look at it this way… If something happens to you and you get very sick or are in a bad accident, you want to get better quickly and RESTORE yourself to FULL health, don’t you?

Database: Yes.

DBA: You don’t want to be a SIMPLEton and only be restored to a fraction of yourself, do you?

Database: No, I suppose not.

DBA: Then we have to get you into an exercise regimen [full and differential backups] and you should be drinking water regularly during the day [log backups] to help flush out your system. I know it seems like a pain, but it can be really easy and doesn’t take all that much time. To get you motivated, I can put you in touch with a personal trainer named C. Quill Agent who will stay on top of these things for you, making sure that you do them regularly.
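/*
A sketch of the exercise-and-hydration regimen DBA prescribes. The file
names and frequencies shown here are illustrative only, not from the
session; in practice, each of these commands would be scheduled as a
recurring SQL Server Agent job.
*/
backup database Deighton --the weekly workout: full backup
to disk=N'C:\SomeBackupLocation\Deighton-Full.bak'

backup database Deighton --the daily workout: differential backup
to disk=N'C:\SomeBackupLocation\Deighton-Diff.bak'
with differential

backup log Deighton --the glass of water: frequent log backups
to disk=N'C:\SomeBackupLocation\Deighton-Log.trn'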

Database: Thanks, doc.

DBA: Don’t mention it. Now, on to other things… You mentioned overwork and disorganization?

Database: Yeah. I feel like I’m doing unnecessary work at my place of business. I’m trying to remedy this by using a strategy that I saw in a self-help book, but it doesn’t seem to work. My workflow still seems incredibly inefficient, and I just feel kind of scatter-brained, if you know what I mean.

DBA: Yes, I think I have an idea of what you’re talking about. Tell me about your methods of organization…

use Deighton
go
select
TableName=object_name(dm.object_id)
,IndexName=i.name
,IndexType=dm.index_type_desc
,[%Fragmented]=avg_fragmentation_in_percent
from sys.dm_db_index_physical_stats(db_id(),null,null,null,'sampled') dm
join sys.indexes i on dm.object_id=i.object_id and dm.index_id=i.index_id
order by avg_fragmentation_in_percent desc
/*
(Names of Tables and Indexes Disguised):
TableName IndexName IndexType %Fragmented
-------------------- ----------------------- ------------------ ----------------
qufi_efferhmiqfz rhaqwqamxim NONCLUSTERED INDEX 99.9925228054434
qufi_efferhmiqfz foliqemi NONCLUSTERED INDEX 99.9788779993241
qufiz zuamriwiy NONCLUSTERED INDEX 99.9643747773424
qufi_efferhmiqfz qufioh NONCLUSTERED INDEX 99.9625355911884
qufiz qufihefifomi NONCLUSTERED INDEX 99.9515738498789
qufiz rmiefihhefi NONCLUSTERED INDEX 99.9515503875969
qufiz lezfaphefi NONCLUSTERED INDEX 99.9515503875969
qufiz erfogofy_zfemfhefi NONCLUSTERED INDEX 99.9481058640374
qufiz erfogofy_rumplifihhefi NONCLUSTERED INDEX 99.9481058640374
qufiz miplerihwofhoh NONCLUSTERED INDEX 99.9339498018494
qufiz zuamrioh NONCLUSTERED INDEX 99.9339061467284
qufiz hilifih NONCLUSTERED INDEX 99.9093381686310
qufiz umpheq NONCLUSTERED INDEX 99.9092558983666
fuhu PK_fuhu CLUSTERED INDEX 99.8109640831758
xmuwimz PK_xmuwimz CLUSTERED INDEX 99.7950819672131
xmuwimz hxe NONCLUSTERED INDEX 97.4358974358974
qufiz PK_qufiz CLUSTERED INDEX 97.2856847758728
xmuwimz lezfqemi NONCLUSTERED INDEX 97.2222222222222
hogozouqz PK_hogozouqz CLUSTERED INDEX 97.0588235294118
xmuwimz fomzfqemi NONCLUSTERED INDEX 96.9696969696970
xmuwimz reloriqziqamxim NONCLUSTERED INDEX 96.9696969696970
xmuwimz zzq NONCLUSTERED INDEX 95.8333333333333
xmuwimz mzmoh NONCLUSTERED INDEX 95.4545454545455
xmuwimz fizfmirumh NONCLUSTERED INDEX 95.4545454545455
ruqferfzlezferrizzih PK_ruqferfzlezferrizzih CLUSTERED INDEX 93.7500000000000
hogozouqz hogozouqqemi NONCLUSTERED INDEX 92.8571428571429
xmuwimzlezferrizzih PK_xmuwimzlezferrizzih CLUSTERED INDEX 90.9090909090909
...And so on and so on
*/
DBA: My goodness, I can see why you’re feeling overworked… You are scattered and fragmented. What is the strategy you mentioned that you’re using to try to alleviate this?

Database: I wrote it down. Here it is… I try to do it often:

use Deighton
go

/*
Rebuild all Indexes and Statistics in the database
*/
declare @Table nvarchar(255)
,@Sql nvarchar(255)

declare TableCursor cursor
for
select
quotename(table_schema)+'.'+quotename(table_name)
from information_schema.tables
where table_type='BASE TABLE'

open TableCursor
while 1=1
begin
fetch next from TableCursor into @Table
if @@fetch_status<>0 break
set @Sql='alter index all on '+@Table+' rebuild'
exec (@Sql)
set @Sql='update statistics '+@Table
exec (@Sql)
end

close TableCursor
deallocate TableCursor

/*
Reclaim disk space by shrinking
*/
dbcc shrinkdatabase(N'Deighton',10)
dbcc shrinkfile(N'Deighton_Data',10)
dbcc shrinkfile(N'Deighton_Log',10)
DBA: Oh my goodness, I definitely see the problem here. And I’ll bet you feel exhausted after doing this, am I right?

Database: Yeah, you got that right.

DBA: Hold on… Let me write down a few notes…

/*
The following will rebuild all indexes from scratch, even though that might not
be necessary. Indexes should be treated on a case-by-case basis. An index should
be REBUILT only when it has high fragmentation, REORGANIZEd if the fragmentation is
moderate, and not touched at all if the fragmentation is very low.
*/
set @Sql='alter index all on '+@Table+' rebuild'
exec (@Sql)
/*
By default, statistics are auto-updated by the system when a table has been
changed by a certain amount. If a table has not changed at all, then there's no
reason to update its statistics. What's worse with the command below is that
it is NOT updating the statistics WITH FULLSCAN. The Index Rebuild above
automatically rebuilt statistics for indexed columns "for free" (using FULLSCAN),
but the command below will just end up re-updating them based on only a
SAMPLE, so it's duplicating the work of the Index Rebuild and creating less
accurate statistics at the same time! Ideally the command below should be doing
UPDATE STATISTICS ... WITH FULLSCAN, COLUMNS so that it only updates non-indexed
column statistics.
*/
set @Sql='update statistics '+@Table
exec (@Sql)
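
/*
A sketch of that suggested alternative. The COLUMNS option restricts the
update to column statistics (those not tied to an index), so it no longer
duplicates the Index Rebuild's work, and FULLSCAN keeps the statistics as
accurate as possible:
*/
set @Sql='update statistics '+@Table+' with fullscan, columns'
exec (@Sql)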

/*
Oh, horror of horrors! SHRINKing a database will just end up completely fragmenting
up all the indexes that had been painstakingly rebuilt above! This is like shooting
yourself in the foot. And it's essentially being done twice! A SHRINKDATABASE
automatically shrinks each of its data and log files, so the two SHRINKFILE commands
are just duplicating the process.
*/
dbcc shrinkdatabase(N'Deighton',10)
dbcc shrinkfile(N'Deighton_Data',10)
dbcc shrinkfile(N'Deighton_Log',10)
DBA: Okay, listen to me carefully. You must stop doing this immediately and never do it again! The first part is generally okay, except you’re spending way too much time and effort in redoing EVERYTHING from scratch. Contact a colleague of mine named Ola Hallengren… he has a much more intelligent approach to doing all of this. But the second part is a huge giant no-no, because it’s destroying everything you painstakingly did in the first part. It’s like you’ve arranged your papers on your desk in nice neat stacks and then a hurricane comes into the room and blows them all over the place. After our session, please read this article by Paul Randal on why you should not be doing this.
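
/*
A bare-bones sketch of the case-by-case approach DBA describes, in the
same cursor style as the original script. The 5% and 30% fragmentation
thresholds are widely-cited rules of thumb, not gospel, and a production
solution like Ola Hallengren's handles many more details:
*/
declare @Sql nvarchar(500)

declare IndexCursor cursor
for
select 'alter index '+quotename(i.name)
+' on '+quotename(object_schema_name(dm.object_id))
+'.'+quotename(object_name(dm.object_id))
+case
when dm.avg_fragmentation_in_percent>30
then ' rebuild'
else ' reorganize'
end
from sys.dm_db_index_physical_stats(db_id(),null,null,null,'sampled') dm
join sys.indexes i on dm.object_id=i.object_id and dm.index_id=i.index_id
where dm.avg_fragmentation_in_percent>5 --leave barely-fragmented indexes alone
and i.name is not null --skip heaps

open IndexCursor
while 1=1
begin
fetch next from IndexCursor into @Sql
if @@fetch_status<>0 break
exec (@Sql)
end

close IndexCursor
deallocate IndexCursor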

Database: Okay, whatever you say, doc.

DBA: It’s also possible that some of the stuff you’re organizing may be doing you more harm than good…

/*
Produce a list of all nonclustered, non-primary-key indexes in the database that
deal with more than 5000 rows. Compare their Reads vs Writes. The list is sorted
by Read/Write ratio. Focus on the indexes toward the top of the list with a
Read/Write ratio of under 1.00... They are candidates for DROPping from the database.
*/
use Deighton
go
select
TableName=o.Name
,IndexName=i.Name
,Rows
,Reads
,Writes
,[Reads/Write]
,Seeks=User_Seeks
,Scans=User_Scans
,Lookups=User_Lookups
,Definition='('+IndexColumns+')'+coalesce(' include ('+IncludeColumns+')','')
,DropIt='drop index '+quotename(i.Name)
+' on '+quotename(c.name)+'.'+quotename(object_name(s.object_id))
from sys.dm_db_index_usage_stats s
join sys.indexes i ON s.object_id=i.object_id and s.index_id=i.index_id
join sys.objects o on s.object_id=o.object_id
join sys.schemas c on o.schema_id=c.schema_id
cross apply (select Rows=sum(p.Rows)
from sys.partitions p
where object_id=s.object_id and index_id=s.index_id) F_Rows
cross apply (select Reads=User_Seeks+User_Scans+User_Lookups
,Writes=User_Updates) F_RW
cross apply (select [Reads/Write]=cast(case
when Writes<1
then 100
else 1.*Reads/Writes
end as decimal(12,3))) F_RW2
cross apply (select IndexColumns
=stuff(
(select ','+c.Name
+case ic.Is_Descending_Key
when 1
then ' DESC'
else ''
end
from sys.index_columns ic
join sys.columns c on ic.Object_ID=c.Object_ID
and ic.Column_ID=c.Column_ID
where ic.Object_ID=i.Object_ID
and ic.Index_ID=i.Index_ID
and ic.Is_Included_Column=0
order by ic.Index_Column_ID
for xml path(''))
,1,1,'')
,IncludeColumns
=stuff(
(select ','+c.Name
from sys.index_columns ic
join sys.columns c on ic.Object_ID=c.Object_ID
and ic.Column_ID=c.Column_ID
where ic.Object_ID=i.Object_ID
and ic.Index_ID=i.Index_ID
and ic.Is_Included_Column=1
order by ic.Index_Column_ID
for xml path(''))
,1,1,'')) F_IC
where s.database_id=db_id()
and objectproperty(s.object_id,'IsUserTable')=1
and i.type_desc='NONCLUSTERED'
and i.is_primary_key=0
and i.is_unique_constraint=0
and Rows>5000
order by [Reads/Write]
,Reads
,Writes
/*
(Names of Tables and Indexes Disguised):
(Output abbreviated):
TableName IndexName Reads Writes Reads/Write
---------------------- ---------------------------------------- ----- ------ -----------
eahoffmeol eahoffmoel_rheqgilugmezfimoh 0 1346 0.000
ehhmizziz ehhmizziz_ehhmizzfypi 0 1880 0.000
rheqgilug rheqgilug_gluxelazimoh 0 42235 0.000
zaxlihgim zaxlihgim_gloh 0 62264 0.000
rheqgilug_hifeolih rheqgilug_hifeolih_rheqgilug_mezfimoh 1 37584 0.000
zaxlihgim zaxlihgim_miruqrolexli_ehjazfmiqfoh 82 62292 0.001
zaxlihgim zaxlihgim_rezhmiriopf_hifeolzoh 113 62323 0.002
zaxlihgim zaxlihgim_oqguorioh 323 62641 0.005
zaxlihgim zaxlihgim_hozxamzexliofimoh 732 62996 0.012
phuqiz phuqiz_phuqi 75 2662 0.028
eahoffmeol eahoffmoel_zuamriwiy_zuamrioh 44 1346 0.033
rheqgilug_mezfim rheqgilugmezfim_zizzouqoh 491 6759 0.073
oqguori_xollexliofimz oqguori_xollexliofimz_pulory_pleq_oh 1907 5440 0.351
miruqrolexli_loqiofimz miruqrolexli_loqiofimz_fypi 13723 19833 0.692
oqguori_xollexliofimz oqguori_xollexliofimz_oqguorioh 11240 5440 2.066
oqguori_xollexliofimz oqguori_xollexliofimz_zuamriwiy_zuamrioh 11465 5117 2.241
phuqiz phuqiz_zuamriwiy_zuamrioh 8303 3476 2.389
miruqrolexli_loqiofimz fw_miruqrolexli_loqiofimz_fypi 75078 24978 3.006
rheqgilug_mezfim rheqgilugmezfim_zuamriwiy_zuamrioh 38364 6759 5.676
ehhmizziz ehhmizziz_zuamriwiy_zuamrioh 21305 2959 7.200
zopruhiz rofy 2572 0 100.000
zopruhiz zfefi 2572 0 100.000
zopruhiz zopruhi 27203 0 100.000
*/
DBA: See the items at the top? Those are cases where you’re spending much more time filing items away than you are acquiring them later. For example, in the fourth entry in the list, you’ve spent time and energy filing 62,264 items but you’ve never once accessed one of them after the fact. Organizing your information in that way is most likely more work than it’s worth, since you aren’t really taking advantage of it anyway.

Database: Hmmm… I see what you mean, doc.

DBA: Well, it looks like we’re out of time for today. We accomplished a lot, but we’ve just scratched the surface, so I’d like to schedule you for regular visits so that we can monitor you and administer more treatments as necessary.

Database: Er… How long will we be doing this monitoring, doc?

DBA: Why, the rest of your life of course. Here’s my bill.

Database: Let’s see… WHAT?! That’s outrageous! I don’t have that kind of money! And you want to continue seeing me and billing me ridiculous amounts like this for the rest of my life? Are you insane? Just who the hell do you think you are? You high-and-mighty types make me want to vomit. You sit there on your high horse and think you know everything. Well, you know what you can do with this here bill? You can take it and just shove---

drop database Deighton
DBA: (Sigh). Another one bites the dust. Oh well… Ms. Foreachdb, would you please send in my next patient?

10 comments:

  1. You and Michael Swart should team up to do a comic strip :)

  2. @Claire:

    Thanks! Michael is one of my favorites!

    --Brad

  3. The ending is so *violent*. I may need therapy.

  4. @Kendra:

    8^)

    I'd had this entire concept swimming in my head for weeks, but I couldn't think of how to end it... Then my eyes stumbled on a DROP command a couple days ago, and I knew I had something... Something twisted, that is.

    --Brad

  5. About the seemingly unused indexes. I'm always worried that once I drop them, the small handful of queries that do use them will start timing out. It's like it's almost better to pay the manageable maintenance cost and space as insurance. What are your thoughts? Have you seen times when dropping a seemingly unused index caused headaches?

    I'm going to start giving my database tables names like "eahoffmeol" and "zaxlihgim".

    But geez, I like the attention to detail. The fact that you're using "Shrink" as a transitive verb.

  6. And I'm with Kendra, The DROP DATABASE Deighton did seem kind of abrupt. It's surprising how attached one can get to a fictional anthropomorphized database in the space of one column.

  7. @Michael:

    I originally got the idea for the "seemingly unused" indexes from a video I saw by Brent Ozar, though I have seen it in other articles and blogs. I believe Brent said he would drop them in a New York minute (correct me if I'm wrong Brent)... and that's the source of the "DropIt" column in the query... but I don't feel altogether comfortable doing it blindly... only if the numbers are grossly skewed, and I try to research it a bit if I can. If, over a long period of time, I don't see ANY reads and TONS of writes, that index seems like a big time-waster.

    However...

    If you read one of Rob Farley's chapters in MVP Deep Dives, he gives an example of an index that "is not used", which in reality might be true as far as seeks/scans, but just its existence is important to creating good plans.

    Actually, I was upset to find out that "shrink" is NOT a transitive verb... I came up with the title first, and then found that out later. So I just made the dictionary entry up. 8^)

    --Brad

  8. Wow, this is super-helpful! Thanks for posting!

  9. Amazing... sarcastic and funny :-D
